Thursday, April 3, 2014

For translators, the missing leg.

I briefly wrote earlier about some of the goodies for translators, and the dismantler thingy has been a debt to pay. While I'm not going to formally cancel the debt, I do plan to show and explain what it is today :)

This is a rather primitive but illustrative view of what the dismantler is about


We can easily get an html view of any of the dialogs along with the extracted text. What for? So you can copy the text straight from the page.
I find it rather primitive that in many cases what a translator gets is a raw, crude text file filled with strings. One typical problem is that they lack 'context' information, so it is easy to make mistakes and transliterate instead of translate.
If instead they get a zip file they can open in the browser with the flow of the application (as shown in previous posts), they can click on the desired dialog and get this view. Even more interesting, they could edit the text directly in the html page and send back the translated text.
Many things can be done along those lines; this is just a teaser of what else is possible and how Murphy fits into the whole flow of application development and testing.
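
To make the idea a bit more concrete, here's a tiny sketch of how such an editable page could be produced from the extracted strings (hypothetical code, not the actual dismantler; the element ids and strings are made up):

def dialog_to_html(dialog_name, strings):
    # Every extracted string becomes an editable block: a translator can type
    # over the text right in the browser and send the page back.
    rows = ['<h1>%s</h1>' % dialog_name]
    for element_id, text in strings:
        rows.append('<div contenteditable="true" data-element="%s">%s</div>'
                    % (element_id, text))
    return '<html><body>%s</body></html>' % '\n'.join(rows)

page = dialog_to_html('Cancellation Confirmation',
                      [('label_1', 'Are you sure you want to quit?'),
                       ('button_yes', 'Yes'),
                       ('button_no', 'No')])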

Now for the sad news... I've been quite busy lately writing new code that is intended to start taking over, with more production quality oriented things: more docs, unit tests and all the bells and whistles required for serious use. However, my activity on murphy has dropped quite a few spots on my priority list for many different reasons, so updates won't come that often in the future. I hope you understand that this is something mostly done in my spare time, and fully realizing it needs quite a serious investment of time. As there is life after murphy (and other interesting experiments to do too), I appreciate your patience and comprehension.

One more! Tomorrow Ohio!
(March 31- April 4, Cleveland - Ohio, https://sites.google.com/site/icst2014/home)

Hope you drop by at the TAIC conference. If you happen to bring a beer for me, give it to Pekka and specifically tell him it is for me or he'll drink it right away :)

-Mat

Wednesday, March 26, 2014

The journey of the experiment, what's ahead

What's new? Nothing. What's coming? A lot.

To begin with, some serious experimental usage will be put in place for Murphy (some really secret stuff, so I won't tell), which translates into various needs.

First of all, experimental code needs to be translated into production quality code. I've been quite busy lately with this long and painful transition; the last 3 evenings were spent rewriting the whole graph library. The new one looks quite ok and the algorithms fit nicely, short and concise (your building blocks are really important), and testability is quite good (yes, nosetest based test cases will start appearing in the source tree).
I did take a bit of time to entertain myself with some cool operator overloading (yes, it's not a cursed word, you know). Here's a peek at how it looks:

graph = Graph()
node = graph.new_node()
edge = node.new_edge("edge 1")

dot = graph.as_graphviz() # we can use graphviz for a nice looking graph
as_dict = graph.as_dict()
graph = Graph.from_dict(as_dict)
# easy serialization, generated dicts are not recursive
json.dumps(as_dict)

# get a path between 2 nodes
a_path = graph['Node 1'] >> graph['Are you sure you want to quit?']

# go from the current location to the given node
graph.traveller >> graph['Installation Finished']
(In case you're still wondering: >> roughly translates to 'to')

# create a node that represents the state of the world as it is now
node2 = graph.new_node(from_world=True)
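
For the curious, the >> trick needs nothing more than a dunder method and a plain BFS. Here's a stripped-down sketch of the idea (the class and method signatures here are made up, they don't match the real library):

from collections import deque

class Node(object):
    def __init__(self, name):
        self.name = name
        self.edges = {}                      # edge name -> destination Node

    def new_edge(self, name, destination):
        self.edges[name] = destination

    def __rshift__(self, target):
        # node >> target: a shortest path between the two, found with a BFS
        visited, queue = {self}, deque([[self]])
        while queue:
            path = queue.popleft()
            if path[-1] is target:
                return path
            for destination in path[-1].edges.values():
                if destination not in visited:
                    visited.add(destination)
                    queue.append(path + [destination])
        return None                          # the nodes are not connected

a = Node('Welcome To SuperApp')
b = Node('Are you sure you want to quit?')
a.new_edge('Cancel', b)
print([n.name for n in (a >> b)])            # ['Welcome To SuperApp', 'Are you sure you want to quit?']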

The whole graph library together with the crawler is in itself a general purpose problem solver; with a few extra layers it can be used for some really interesting stuff. I'm really looking forward to having some time to play with that (outside the scope of Murphy, of course).

The user interface is screaming for help. Even after I spread requirejs around to prevent global namespace pollution, it is in desperate need of a framework. I'm still undecided; the current options are angularjs and emberjs, but there are many and I'm not very familiar with them.
Back then, the first UI was done with Tkinter (oh boy... that was painful), then it was a simple notepad based html page, but now there are many bits and pieces and it needs proper componentization as it is getting out of hand.

And I am still in debt with the 'Dismantler' thingy; I have yet to truly show what it is about. The basic idea is to decompose the UI elements in a way that is friendly for translators, but of course I have yet to find the time to do an experimental demo of it.

Then again, debt must be paid, so refactoring and writing test automation code will continue for several more days.

-Mat


Friday, March 21, 2014

Freshly baked version and some hard stuff

Yez, the github code was updated today with the latest, greatest and hopefully not too broken new version. Versioning is still on my todo list; I promise next time I commit code I'll do it :)

In any case, let's move forward with some really hardcore stuff. It is probably quite safe for you not to read beyond this point as I'm going to get very technical, but I MUST write this down at least for myself.

One of the most important parts of murphy is the ability to crawl the application and make a map or model out of it. I split the problem into 4 stages that start quite simple and most useful, with a notable increase in complexity at each stage.

Before going into the stages we first need some basic concepts. To begin with, the model is a directed graph: each node represents a state, while each edge represents the transition into another state. That's trivial, but we're building a graph of the UI of an application.
In this case a node represents the state of the UI; in a windowing environment that usually means a window or dialog. Edges then represent what I can do with that dialog (pressing buttons, entering text, ticking checkboxes, etc).
Things are not THAT simple though; let's look at a simple dialog:


If we literally take the image as a state, then each combination of pressed / unpressed check boxes, radios and so on will explode the number of different states and images to store. To reduce that, the state is parametrized and so is the image; we can then recognize a state from a screenshot by comparing it against our parametrized image. A parametrized image looks like this:


It is then a matter of finding the image inside a screenshot of the whole screen; the red areas are ignored during the comparison (as their content can change).
Without going deeper into the how and why (or I'll spend the whole night explaining), those are more or less the basics of recognizing the different states.
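
As a rough illustration of the matching itself, a brute force toy version could look like this (the real matcher is of course smarter and much faster; the function name and arguments are made up):

import numpy as np

def find_state(screenshot, template, ignore_mask):
    # Slide the template over the screenshot; pixels where ignore_mask is
    # True (the 'red areas') are skipped in the comparison.
    sh, sw = screenshot.shape[:2]
    th, tw = template.shape[:2]
    keep = ~ignore_mask
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            window = screenshot[y:y + th, x:x + tw]
            if np.array_equal(window[keep], template[keep]):
                return (x, y)                # top-left corner of the match
    return None                              # the state is not on screen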

As we are interested in producing a human readable statechart or graph of the flow, this is good enough for most cases. Let's move on to the crawler and the different stages then.


Stage 1: Big splash

The simplest crawler tries to visit as many dialogs / states as possible with as little work as possible. At this stage we're mostly interested in getting a bird's eye view of the application, so the crawling uses a very simple process; again, some simple flow (in text mode this time):


While this strategy gives us fast results, we can see that certain things tend to be confusing. In the center of the graph we can see the 'Cancellation Confirmation' and a mess of arrows around it, and not even all the paths are valid: if you come from the 'Welcome To SuperApp' state you cannot then go to 'No__Select Installation Directory', it is an invalid flow.
Separating that state requires a more advanced crawler.
This is the crawler that is currently in github.
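
In pseudo-Python, the stage 1 loop boils down to something like this (every helper name here is hypothetical, just to show the shape of the algorithm, not murphy's actual API):

def crawl_stage1(world, graph):
    # Visit every reachable state once; in each state try every action we
    # have not tried yet and record where it leads. 'world' and 'graph' are
    # stand-ins for the real objects.
    pending = [graph.new_node(from_world=True)]
    while pending:
        node = pending.pop()
        for action in world.actions_available_in(node):
            if node.has_edge(action):
                continue                      # already explored
            world.bring_to(node)              # replay a known path to the state
            world.perform(action)
            target, is_new = graph.node_for(world.current_screen())
            node.new_edge(action, target)
            if is_new:
                pending.append(target)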


Stage 2: More details but not exhaustive

As we step up to the challenge we want something nicer and we're willing to spend a bit more machine time; this crawler can decouple this kind of 'reusable dialog'.
Internally what it does is associate a state with a context; if there's an invalid case like the one above (from the welcome, press cancel, then no_select installation directory) it opens it up as a separate state. Now the graph looks like this:


Now we're talking... all paths are valid (or almost, for most cases). As a convention, you can see that in the text the __Blah... suffix is a disambiguation of where it comes from; this is useful when we write manual scripts (one day I may write about that).
Context means (in the context of this discussion) the path we travel to arrive at such a node, more specifically only the predecessor node/edge; as we're still at a manageable complexity we only care about the immediately preceding node.
(And I am happily and proudly showing you the first results of the soon to come version :))
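
In code terms, the identity of a node stops being just the recognized image and becomes the pair (image, predecessor), roughly like this (simplified, hypothetical code):

def node_key(recognized_state, came_from, stage=2):
    # Stage 1: a dialog is the same node no matter how we arrived at it.
    # Stage 2: the predecessor edge is part of the identity, so a 'reusable'
    # dialog reached from different places can be split into separate nodes
    # (the real crawler only splits when the flows would otherwise conflict).
    if stage == 1:
        return (recognized_state,)
    return (recognized_state, came_from)

node_key('Cancellation Confirmation', 'Welcome To SuperApp__Cancel')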


Stage 3 and 4: when hell freezes

These ones are a real pain. We step up the challenge significantly and now the graph is not closed anymore; at this point we care about all the nodes / edges that affect the possible next step. Wha???
Yes, let me translate that. Let's suppose that the 2nd node in the graph has 1 checkbox; it can be on or off, and depending on it the 8th node may vary, so we need to detect flow dependencies beyond the immediately preceding node. Context is now the full path we travel!

Heavy stuff, and it will come, but the plan is to get there gradually; since most of the benefits are collected with the simpler crawlers, there's no hurry to get to stages 3 and 4.
However, the purpose of stages 3 and 4 is to allow super cool functionality in the workbench: you will then be able to instruct testing / exploration with the mouse in an extremely powerful way. You could then, for example, do the following things:

  • Get a quick graph (of stage 2 quality)
  • Select areas of the graph
  • For each area, specify whether you want exhaustive exploration (try many different ways to reach the same state)
  • Loop areas, for example try a series of values in a field, say a collection of valid and invalid inputs

And so on, you get the idea, right? The potential is really interesting.

But time is up and the post is long, so I'll be on my way.

-Mat

PS: I warned you it was going to be heavy stuff...




Thursday, March 20, 2014

A crash but gentle video introduction to murphy.

The promised video has arrived: a crash introduction, in 4:11 minutes, to what murphy is about.


(for better resolution watch it at http://youtu.be/zUYmzYI_pvY and set the settings / quality to a high value; better yet, let me know how to put high res videos in blogger :) )

Some of the things you see in the video are not yet in the source code at github; I still need to check that nothing major got broken. Some features you see there are at an early and experimental stage, you know, the usual disclaimer.
More info to come as soon as I push the changes to github; until then I hope you enjoy the video.

-Mat

Thursday, March 13, 2014

Quick update and introducing 'Da skinator'

Yeap, it's been a few days already, but quite hectic ones. I did rush out a few small yet handy features but I won't write about them yet, as I'm also working on a small video that shows them along with some of the known stuff; hopefully it will be ready within the next few days. And yes, I spent quite some time paying code debt and sanitizing some code in the web workbench; the work is not finished but it was getting out of hand already.

So what's that 'skinator' thingy? Well, before going into that you have to pronounce it right: it is like the old Addams Family pinball when it says 'the mamushka', so it sounds something like 'Daaa Skiiiinaator'!

Ok, now that we know how to say it, let's get into the explanation. Some time ago, about a year or so, it occurred to me that because the scraper component in Murphy disassembles the user interface, I have a lot of information on how to reconstruct it. So what good is that?
Well, product customization is not a rare thing; co-branding and other things that modify the original application and produce a custom version have their uses. Too abstract so far? Ok, let's look at a screenshot.


The popup window shows a reconstructed version of the UI built from the information the scraper got from the application (please be gentle, I spent just 2 hours today on it so it is rather crude at the moment).
You could try out how it would look if you change the font size, colors and so on; even cooler, as you watch the whole flow you could see how the whole application would look if you change, for example, the palette.
The point is that it would allow you to quickly preview mass changes in your application without the need to recompile or even run it; the result could then be zipped and sent to the customer to get early feedback.
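
To give a feeling of what 'reconstructing' means here, a toy version could be as simple as the following (hypothetical code; the widget fields and palette keys are made up):

def render_dialog(widgets, palette):
    # Rebuild a scraped dialog as plain html so fonts and colors can be
    # swapped without recompiling or even running the real application.
    rows = []
    for w in widgets:
        style = ('position:absolute; left:%dpx; top:%dpx; '
                 'font-family:%s; color:%s; background:%s') % (
                     w['x'], w['y'], palette['font'], palette['fg'], palette['bg'])
        rows.append('<div style="%s">%s</div>' % (style, w['text']))
    return '<html><body>%s</body></html>' % '\n'.join(rows)

preview = render_dialog(
    [{'x': 10, 'y': 10, 'text': 'Welcome To SuperApp'},
     {'x': 10, 'y': 60, 'text': 'Install'}],
    {'font': 'Segoe UI', 'fg': '#222222', 'bg': '#f0f0f0'})

Swap the palette dict and you get an instant preview of the re-skinned dialog.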

It is not a trivial thing to do but it is not that difficult either. Anyway, I don't have any immediate plans to move that functionality forward, but I wanted to show other kinds of things that become possible once we have the model of the application.

Hope the next post is about the video being ready.

-Mat


Thursday, March 6, 2014

Thursdays are no Wednesdays but still...

I know, I said Tuesday or Wednesday but it actually happened today: an update of the source code in the git repo.

I already got some feedback about a couple of minor glitches to fix, and the update I pushed today addresses some of them. There's no changelog nor versions yet, it is all 'trunk'; sorry for that, I'll work it out in the coming days.

However, I managed to squeeze in a nice small addition: you can now have a live view while murphy does its stuff. Here's the shot:


So, in the top-right corner there's a small preview window that constantly updates with what is happening in the virtual machine, useful mostly for debugging / troubleshooting. In the near future I'd also like to integrate it with the other functionality, meaning that when you request a machine in a specific state you could see it working its way there in the 'Live' window. Probably even taking control of it right from the browser without the need to launch an external client; it is not much work as I have all the building blocks spread around.
The live view also introduced 2 nice things: requirejs (oh boy, how messy javascript can be...) and a small windowing library for html-javascript. It is something I was poking at some time ago; while html is great for some things it sucks for others, so it helps me a lot to have it here. Yez, you can resize the window, move it around, close it and so on.
Before I forget, the live preview is disabled by default until it has had enough testing, but you can easily turn it on with a parameter in base_extractor.py.

Where are we heading then? Well, first I'd like to get some feedback on the disk / network / registry capturing feature, and more particularly on how to show the information better; it is quite crude and raw at the moment.
Then? Ah, start harnessing the possibilities of that information, meaning capture what happens when the network gets cut and the application tries to use it. Sounds simple, but hey, if I cut the network then murphy stops receiving the images and sending the commands! So it is not so trivial to do and may require a few attempts.
Then? Arggggg, I need to pay back code debt and fix some of the old code which is in bad shape, but that's an ongoing thing in the background anyway. I will start converting things to use requirejs on the javascript side and including the pebble library.

Speaking of which, I'll explain a bit better what I *tried* to say in the last post:

>>I wonder if it could be simplified as much as to not to even have to use the get(), that'd be awesome.

What I mean is that the following code:

@process()
def do_job(foo, bar=0):
    return foo + bar

t = do_job(x, y)

returns a task object, so you must then call t.get()

However, with python it is possible to return a proxy object which does the get() call for you. To better picture what I'm talking about, imagine the following case:

@process()
def calc_a(foo, bar=0):
    return foo + bar
@process()
def calc_b(foo, bar=0):
    ...
@process()
def calc_c(foo, bar=0):
    ...
@process()
def calc_d(foo, bar=0):
    ...

a = calc_a(x, z)
b = calc_b(x, z)
c = calc_c(x, z)
d = calc_d(x, z)

result = (a * b)
if result > pp:
    result += (c * d)
else:
    result -= (d / c)

If calc_X returns a proxy object, the get is performed automatically on first use and the synchronization is handled for you 'in the background'; otherwise the code would look like this:

@process()
def calc_a(foo, bar=0):
    return foo + bar
@process()
def calc_b(foo, bar=0):
    ...
@process()
def calc_c(foo, bar=0):
    ...
@process()
def calc_d(foo, bar=0):
    ...

a = calc_a(x, z)
b = calc_b(x, z)
c = calc_c(x, z)
d = calc_d(x, z)

result = (a.get() * b.get())
if result > pp:
    result += (c.get() * d.get())
else:
    result -= (d.get() / c.get())
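
Just to make the proxy idea concrete, here's a very rough sketch of how such a wrapper could work (purely illustrative, not part of Pebble nor of murphy):

class LazyResult(object):
    # Wraps a task and resolves it the first time its value is actually used.
    def __init__(self, task):
        self._task = task
        self._value = None
        self._done = False

    def _resolve(self):
        if not self._done:
            self._value = self._task.get()   # blocks here, on first real use
            self._done = True
        return self._value

    def __add__(self, other):     return self._resolve() + _unwrap(other)
    def __sub__(self, other):     return self._resolve() - _unwrap(other)
    def __mul__(self, other):     return self._resolve() * _unwrap(other)
    def __truediv__(self, other): return self._resolve() / _unwrap(other)
    def __gt__(self, other):      return self._resolve() > _unwrap(other)

def _unwrap(value):
    return value._resolve() if isinstance(value, LazyResult) else value

The decorator would then return LazyResult(task) instead of the bare task, and code like the first example above would just work.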

I find it interesting because nowadays things tend to go the closures way, which IMHO is more confusing for this type of case; the same code with closures 'a la javascript' or dart is indeed longer and a bit more obfuscated.
Matteo and I brainstormed further and got some other interesting ideas that could be tried; imagine for example the decorator:

@parallel()
def calc_a(foo, bar=0):
    return foo + bar

The 'parallel' implementation could execute it threaded, as a separate process, or even as a remote process on another machine. Of course it is not quite trivial to do, nor to know beforehand what would be most efficient, but there's the challenge! And an interesting one: you could give hints or preferences about where to execute, or collect timing information on the fly and use it later... quite interesting...
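
As a bare-bones starting point (a sketch I'm making up on the spot, with only a threaded backend, nothing that exists in Pebble or murphy), it could look like this:

import functools
from concurrent.futures import ThreadPoolExecutor

_POOL = ThreadPoolExecutor(max_workers=4)

def parallel():
    # Toy version: only a threaded backend; a process or remote backend
    # would plug into the same submit() spot, chosen by a hint or timing data.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return _POOL.submit(func, *args, **kwargs)   # returns a Future
        return wrapper
    return decorator

@parallel()
def calc_a(foo, bar=0):
    return foo + bar

print(calc_a(1, 2).result())   # prints 3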

But then again, the post is getting quite long and I already started to babble.

-Mat

Monday, March 3, 2014

X ray vision for Murphy, real soon.

Yez, after some poking around and some heavy coding, I'm almost ready to roll out a new version with experimental support for x-rays.

What's that? As posted earlier, Murphy can now see when an application writes to disk or uses the network or registry, and relate that information to the user interface, opening interesting possibilities in the automatic extraction of the user interface. It has some other uses too; for example, a user pointed out that it helps him when doing penetration testing to quickly spot part of the attack surface. More to come on that later.

Here's a demo / test app with the latest stuff.



And this is how it looks if you click on those icons:


The feature is still in an early stage so it will change and adapt based on usage and feedback.
Here's a closer look at the icons, which I pulled from the tango project (except for the registry icon, which I drew myself):


It took a while since I fixed other stuff along the way; the plan is to commit it to github tomorrow or by Wednesday at the latest. Check then if you're anxious to try it out.

In related news, there's quite some debt to be paid in the code, and I just found a nice piece of code to help me with some parts of it.
The web server executes some stuff as separate processes as a way to maintain its stability, and that code is a bit green, it never matured enough, so this library (Pebble) is super cool for that. From its page:

@process(timeout=10)
def do_job(foo, bar=0):
    return foo + bar

That's all it takes! Then you simply call do_job(1).get() as if it were a normal function!
Just to drool a bit... it even takes care of handling any exception the function may raise, including the stack trace, and brings it back to the caller process. It is just damn cool!
Of course, the fine print is that anything you pass as a parameter or return as a result must be picklable, but then again that is not an issue in most cases.

The decorator does the magic and returns a task object; on the task object you call .get(). That's why instead of
   do_job(1)
I wrote
  do_job(1).get()

I wonder if it could be simplified so much that you wouldn't even have to use get(); that'd be awesome.

Kudos to Matteo for the nice job there!

-Mat