Lots of people are interested in the analysis of computer programs represented as assembly language. The reasons for this vary. Some want to analyze programs for which, for whatever reason, they do not have the source code. Others want to prove properties of programs that they generate. Some are analyzing malicious pieces of software.
This kind of analysis is usually motivated by one of two goals: assuring software, or analyzing malicious software. The latter case is much more annoying, because when we attempt to assure software we can draw a box around behavior that is allowed and reject any program with behavior outside the box. Our system doesn't have to understand what happens in the behavior outside the box; it just has to recognize the boundary violation and reject the program outright.
Consider the following assembly program:
proc:
    call _a
_a:
    pop eax
    mov esp, eax
    sub esp, 2
    xor ecx, ecx
    sub ecx, 0x1501
    push ecx
    ret
The question is: what does the procedure at label proc return? Think about this question, look at the code, but don't actually assemble and run it (that's cheating).
There's a dirty trick hidden in this snippet. I'll highlight the first part of the trick, which should be obvious:
mov esp, eax
Why would any sane program manually manipulate the stack pointer in this way? When we're dealing with programming languages and runtimes, questions that begin with "why would any sane program..." usually have an answer. In this case, a programmer with a background in traditional C code would ask why for good reason: the C compilers and runtimes that the programmer encounters in everyday work would not alter the stack in this way. They view the stack in a traditional sense, where the value in the stack pointer register is gifted to the program by the operating system and there is no reason to do anything with it other than add to and subtract from it.
However, what if your C compiler and runtime implemented and supported segmented stacks? The C standard doesn't specify that a program's or thread's stack must be contiguous. There are proposals and implementations where the program's stack is split up into multiple 'stacklets' and a function's prologue initializes the current stack frame to a stacklet of an appropriate size.
We can't discard as wholly invalid any program that loads a new stack address, then, but we can give it much closer attention! Once we load a new value into esp, any subsequent push or pop instructions will be stores and loads relative to this new base. So what value do we load into esp?
We load the address, in memory, of the pop eax instruction! This might be quite surprising. What happens when our stack overlays our code? Well, the push instruction is just a memory write, so, we'll modify our own code. Or will we?
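To make the trick concrete, here is a small Python sketch that traces esp through the snippet. The load address PROC and the initial stack value are made-up; the one real encoding fact used is that `call rel32` is a 5-byte instruction, so `_a` sits at proc+5.

```python
# Trace of the snippet's effect on esp. PROC is an assumed load address;
# `call rel32` encodes in 5 bytes (E8 xx xx xx xx), so _a = PROC + 5.
PROC = 0x1000
esp = 0x9000                  # some ordinary stack address, assumed
stack = {}

# call _a: pushes the return address, which is the address of `pop eax`
esp -= 4
stack[esp] = PROC + 5

# pop eax: eax now holds the address of the `pop eax` instruction itself
eax = stack[esp]; esp += 4    # eax = PROC + 5

# mov esp, eax ; sub esp, 2
esp = eax - 2                 # esp = PROC + 3: pointing into the call!

# xor ecx, ecx ; sub ecx, 0x1501
ecx = (0 - 0x1501) & 0xFFFFFFFF   # ecx = 0xFFFFEAFF

# push ecx: decrement esp, then a 4-byte store at [esp]
esp -= 4                      # esp = PROC - 1
# That store covers PROC-1 .. PROC+2, clobbering bytes of the call
# instruction itself. The following `ret` would then pop 0xFFFFEAFF and
# jump there -- if the store is allowed to happen at all.
```

Whether that store is allowed to happen is exactly where the next part of the trick comes in.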
This is a curious case where the semantics of your assembly, taken out of context, are not the whole story. The page permissions of the memory where the code resides dictate whether or not the program halts. Those permissions could be set by some other part of the program, by the executable loader, or, on some operating systems, by another process via the operating system kernel. If your model of instruction semantics is faithful, the push ecx instruction carries a branch to some location that represents a page fault handler.
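Here is a hypothetical sketch of what "faithful" store semantics might look like: the write carries an implicit branch on page permissions, so `push ecx` either performs the store or transfers control to a fault handler. The page size, the permission encoding, and all names are illustrative.

```python
# Toy model: a 4-byte store whose semantics include an implicit branch
# on page permissions. PAGE and the W bit are assumptions of this model.
PAGE = 0x1000
W = 0b010   # write-permission bit

def store32(mem, perms, addr, value):
    """Little-endian 4-byte store, or a 'branch' to the fault handler."""
    pages = {(addr + i) & ~(PAGE - 1) for i in range(4)}
    for page in pages:
        if not (perms.get(page, 0) & W):
            return ("#PF", addr)       # the hidden branch in `push ecx`
    for i in range(4):
        mem[addr + i] = (value >> (8 * i)) & 0xFF
    return None                        # fall-through: the write succeeded
```

With the code page mapped read-execute, the same `push ecx` "branches" to the fault handler; flip the W bit on and it silently rewrites the program text instead.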
For static analysis, dynamic analysis actually gives us some hints on how to proceed from here. Dynamic analysis frameworks (like PIN, Valgrind, DynInst, and so on) generally accommodate self-modifying code. Dynamic frameworks generally invalidate cached facts about program semantics when they observe a write to program code. Then, when they fetch, decode and execute from a modified location, they decode again instead of referring to cached program text.
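The invalidation discipline itself is simple; this toy sketch (which assumes nothing about any particular framework's internals) caches decodes by address and drops the cached entry whenever a store is observed at that address.

```python
# Toy decode cache with write invalidation. The "decoded instruction"
# is just a tuple stand-in; real frameworks cache translated code.
decode_cache = {}   # address -> decoded instruction

def fetch_decode(mem, addr):
    """Decode the byte at addr, reusing a cached decode if one exists."""
    if addr not in decode_cache:
        decode_cache[addr] = ("insn", addr, mem.get(addr))
    return decode_cache[addr]

def observed_store(mem, addr, byte):
    """A write to program memory: perform it and drop any stale decode."""
    mem[addr] = byte
    decode_cache.pop(addr, None)   # next fetch re-decodes the new byte
```

A write that is never observed never invalidates anything, which is exactly why the framework must interpose on every store.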
A static analysis framework can do this also, but it has a few problems that dynamic analysis systems don't. The static analysis system has to determine that a memory write lands on code the program will execute in the future. If the static analysis system can do this, then it can create a new address space to represent the future timepoint at which the program text has changed.
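One way to picture that "future timepoint" is as a versioned address space: a write the analysis proves lands on code forks a new version of the space instead of mutating the current one. This sketch is an assumption about representation, not any particular tool's design, and all names are made up.

```python
# Versioned address spaces: spaces maps version -> {addr: byte}.
# A write inside a known code range forks a new version (timepoint).
def write_byte(spaces, version, addr, byte, code_ranges):
    """Apply a store; return the version holding the post-write space."""
    new_space = dict(spaces[version])       # copy-on-write would be nicer
    new_space[addr] = byte
    if any(lo <= addr < hi for lo, hi in code_ranges):
        new_version = max(spaces) + 1       # program text changed
    else:
        new_version = version               # plain data write
    spaces[new_version] = new_space
    return new_version
```

The analysis can then decode version 1's bytes independently of version 0's, keeping facts about the original code intact.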
This can help with the determination of self-modification. However, it does not help with the permission semantics attached to individual memory locations. Perhaps our semantics-carrying analysis languages need to draw arrows for memory accesses and annotate them with permission predicates. Of course, given that in most modern operating systems those permissions can be modified from almost anywhere in the program, and that determining that a particular call into the operating system modifies a particular address could be infeasible or impossible, it's questionable what good these annotations would do.
Summary and Our Tools
There are two powerful tools that static analysis systems have to help them deal with evil code: time-sensitive address spaces and precise instruction semantics. With instruction semantics, we can easily recognize the fragment as self-modifying. With an abstract domain for executable code, we could (probably) reason about the new control flow graph enough to determine what the program would do if the memory writes succeed.