Answers on here feel very vague.
The first emulator you write is usually a low-level interpreter: you read the instruction set manual, decode each opcode, and dispatch to the code that implements it. Depending on the complexity of the instruction set, that can make for very hard-to-read code.
Typically, brackets [ ] are memory accesses, and a switch statement matches the exact opcode number and runs the corresponding handler. So you'd get:
C:
unsigned char memory[0x10000]; /* guest memory, loaded or filled out later */
int ip;                        /* instruction pointer */

switch (memory[ip]) {
case 0: /* add instruction */
    break;
case 1: /* sub instruction */
    break;
case 2: /* jump instruction */
    break;
default: /* catch-all: usually undocumented or unsupported opcodes */
    break;
}
You then simulate the effect of each instruction 1:1, advancing the instruction pointer as you go.
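To make that concrete, here's a minimal sketch of a full fetch-decode-execute loop for a made-up machine with one accumulator and four opcodes. The opcode numbers, the HALT instruction, and the sample program are assumptions for illustration, not any real instruction set:

C:
#include <stdio.h>

unsigned char memory[256]; /* guest memory */
int ip  = 0;               /* instruction pointer */
int acc = 0;               /* accumulator register */

void run(void) {
    for (;;) {
        unsigned char opcode = memory[ip++];  /* fetch */
        switch (opcode) {                     /* decode + execute */
        case 0: acc += memory[ip++]; break;   /* add immediate */
        case 1: acc -= memory[ip++]; break;   /* sub immediate */
        case 2: ip = memory[ip]; break;       /* jump absolute */
        case 3: return;                       /* halt */
        default:                              /* undocumented opcode */
            printf("bad opcode %d at %d\n", opcode, ip - 1);
            return;
        }
    }
}

int main(void) {
    /* sample program: acc += 5; acc -= 2; halt */
    unsigned char prog[] = { 0, 5, 1, 2, 3 };
    for (int i = 0; i < 5; i++) memory[i] = prog[i];
    run();
    printf("acc = %d\n", acc); /* prints 3 */
    return 0;
}

A real emulator does exactly this, just with hundreds of opcodes, flags, interrupts, and cycle counting layered on top, which is where the hard-to-read part comes from.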
Higher-level emulation like JIT recompilation goes further: it may decode instructions much like the loop above on a first pass, but it then writes the equivalent native instructions into a block of executable memory, so every later pass over that code runs lightning fast.
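Here's a hedged sketch of just the "write native code into memory and jump to it" part, assuming x86-64 and a POSIX system (Linux; macOS on Apple Silicon needs extra steps like MAP_JIT). It emits the machine code for "return 42" rather than translating a real guest program, but the mechanism is the same one a recompiler uses:

C:
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 machine code for: mov eax, 42 ; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* allocate a writable page (write first, then flip to executable,
       since many OSes forbid pages that are both at once) */
    void *block = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (block == MAP_FAILED) return 1;

    memcpy(block, code, sizeof(code));            /* the "recompile" step */
    mprotect(block, 4096, PROT_READ | PROT_EXEC); /* make it executable */

    int (*fn)(void) = (int (*)(void))block;       /* jump into generated code */
    printf("%d\n", fn());                         /* prints 42 */
    return 0;
}

A real recompiler would emit a block like this per chunk of guest code and cache it, so the decode cost is paid once instead of on every execution.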
The difference in practice: emulating by interpretation typically runs on the order of 200x slower than the original hardware, while recompilation and other more recent techniques might be, say, 5x slower.
As for games using GPUs, the emulator acts as a pass-through: it interprets the guest's graphics commands, then rebuilds them as calls into DirectX/Vulkan/OpenGL, letting it wash its hands of those details rather than fully emulating the video hardware as well.
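A rough sketch of that translation idea, with heavy caveats: the guest_cmd struct and the GUEST_CLEAR/GUEST_DRAW opcodes are made up for illustration (real emulators parse the console's actual command buffers), and this assumes an OpenGL context is already set up, with the header path varying by platform:

C:
#include <GL/gl.h>

enum guest_cmd_op { GUEST_CLEAR, GUEST_DRAW }; /* hypothetical guest GPU ops */

struct guest_cmd {
    enum guest_cmd_op op;
    float r, g, b, a;   /* for GUEST_CLEAR */
    int first, count;   /* for GUEST_DRAW */
};

/* Translate one guest GPU command into host OpenGL calls
   instead of simulating the GPU's internals. */
void translate_cmd(const struct guest_cmd *cmd) {
    switch (cmd->op) {
    case GUEST_CLEAR:
        glClearColor(cmd->r, cmd->g, cmd->b, cmd->a);
        glClear(GL_COLOR_BUFFER_BIT);
        break;
    case GUEST_DRAW:
        /* assumes vertex state was already translated and bound */
        glDrawArrays(GL_TRIANGLES, cmd->first, cmd->count);
        break;
    }
}

The host driver and GPU then do the actual rendering work, which is why this approach scales so much better than software-emulating the video chip.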
edit: Upon further reflection, the C switch-case example is probably better described as interpretation rather than "low-level" emulation. I was thinking about the complexity of building the emulator and how it works, versus the faster, more advanced methods.