Last week AngelScript maintainer Andreas Jonsson announced on GameDev.net that the new version (2.30.1) of the popular scripting language adds support for MIPS Release 2 CPUs, developed and tested on the Creator Ci20 microcomputer.

We reached out to Andreas and asked him to share more details about his work for developers who are interested in using AngelScript on MIPS.

If you are interested in the latest MIPS-related news and updates from Imagination, make sure to follow us on Twitter (@ImaginationTech, @MIPSguru, @MIPSdev), LinkedIn, Facebook and Google+.

Adding support for native calling conventions on Linux with MIPS

When I started working on the Linux version of AngelScript for MIPS, the first step was to do some research to find documentation on the ABI. I couldn’t find any official up-to-date specifications but the following links provided some useful information:

As you can see, they are all quite old and none of them even mentions how C++ class methods work.

The next step was to identify which compiler pre-defines were suitable for automatically detecting MIPS as the target processor while compiling the library. With GCC this is done by running the following command:

$ echo . | g++ -dM -E -

This will print all the default pre-defines, out of which hopefully some will clearly identify the OS and CPU. I chose the following set to identify the GCC compiler targeting Linux on MIPS:

#define __GNUC__ 4
#define __linux__ 1
#define __mips__ 1
#define _ABIO32 1

I then used these to configure the target platform in the as_config.h header file, which is where I’ve put 99% of the platform-specific configuration in the AngelScript library to minimize pollution in the rest of the code.
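A detection block of this kind might look roughly as follows. This is a hedged sketch in the style of as_config.h: the macro names AS_LINUX and AS_MIPS are illustrative, and the block in the shipped header may use different names or additional checks.

```cpp
// Sketch of a platform-detection block in the style of as_config.h.
// AS_LINUX / AS_MIPS are illustrative names; the actual header may differ.
#if defined(__GNUC__) && defined(__linux__)
    #define AS_LINUX
    #if defined(__mips__) && defined(_ABIO32)
        // Linux on MIPS using the O32 ABI
        #define AS_MIPS
    #endif
#endif
```

Keeping every such check in one header means the rest of the library only tests a handful of internal macros rather than compiler-specific ones.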

As AngelScript already had support for MIPS on the PlayStation Portable from an earlier contribution by Manu Evans (of Krome Studios) back in 2006, it made sense to first test whether this code worked for Linux too. It compiled alright, but unfortunately the regression test suite I have didn’t run too well; the ABI used by the PlayStation Portable is not the same as the one used by Linux.

So, now I had to do some reverse engineering on the ABI to figure out what was different. I did this by compiling simple functions to assembler code, like this:

$ g++ -S -c test_cdecl_return.cpp

The function below:

asINT64 reti64()
{
    return 0x102030405L;
}

produced the following assembler code:

        .cfi_startproc
        .set    nomips16
        .ent    _ZN15TestCDeclReturnL6reti64Ev
        .type   _ZN15TestCDeclReturnL6reti64Ev, @function
_ZN15TestCDeclReturnL6reti64Ev:
        .frame  $fp,8,$31               # vars= 0, regs= 1/0, args= 0, gp= 0
        .mask   0x40000000,-4
        .fmask  0x00000000,0
        .set    noreorder
        .set    nomacro
        addiu   $sp,$sp,-8
        .cfi_def_cfa_offset 8
        sw      $fp,4($sp)
        move    $fp,$sp
        .cfi_offset 30, -4
        .cfi_def_cfa_register 30
        li      $2,33751040             # 0x2030000
        ori     $2,$2,0x405
        li      $3,1                    # 0x1
        move    $sp,$fp
        lw      $fp,4($sp)
        addiu   $sp,$sp,8
        j       $31
        nop
        .set    macro
        .set    reorder
        .end    _ZN15TestCDeclReturnL6reti64Ev
        .cfi_endproc

With the understanding gained from my previous research, and the MIPS instruction reference I found, it was possible to understand from the above assembler code that the 64-bit value returned by the function is loaded into the registers $2 and $3 before returning.

Using the same technique, I saw how registers were loaded with arguments for function calls.

With test cases I’ve previously written specifically for verifying platform portability, I could then test the code, starting with simple function calls and moving up to more and more complex scenarios. The simplest test is to call a global function that takes no arguments and returns no value. The next test adds passing in a single integer value, then two, three, etc.; then the tests add returning integer values.

Once those tests passed, the test cases moved up to add a mix of float values that often used a separate set of registers, then mixed primitives of different size and order to get the proper register and stack alignments.

Then the regression tests advanced to passing objects by value to functions and returning objects by value. Here the tests used a variety of objects of different sizes, with different members and different behaviors, as any combination of these can make the ABI decide to pass the object in registers, on the stack, or on the heap. In this respect the MIPS ABI was really simple (the worst ABI I’ve worked with so far is the PowerPC ABI, which breaks an object down member by member and passes each member in a different CPU register depending on the object).

Finally the tests also verified how different types of class methods work: normal non-virtual class methods, virtual class methods, class methods for classes with multiple inheritance, etc.

Once all the tests written for platform portability verification passed, I moved on to running the full suite of the regression tests.

All this of course involved a lot of trial and error, with debugging sessions stepping through the code instruction by instruction to verify exactly how the CPU registers were modified in particular scenarios and so on.

All in all, the work probably didn’t take more than a standard work-week, spread out over several nights and weekends of course since AngelScript is only a hobby project of mine.

Flashing the Ci20 with Android

To install Android on the Creator Ci20, I mostly followed the excellent instructions on elinux.org. However, the win32diskimager tool they recommend for writing the image file to the SD card wouldn’t work for me. Instead I used the free ImageUSB tool, which worked perfectly (not to mention that it is much more user-friendly).

Once Android was up and running on the Ci20, it was necessary to make a few adjustments to allow installing unknown apps and debugging apps on it. To do this I followed the instructions below:

Then of course, the Ci20 board needed to be connected to the PC so the app can be easily deployed; this article shows you how to do that.

Adding support for native calling conventions on Android with MIPS

Adding support for Android required a different kind of work, mostly due to my inexperience with Android. I had never written an app for Android, so I had to read up on how to use the various tools provided by Google. Luckily the documentation for the Android SDK and Android NDK is quite good, so I didn’t have to do a lot of guesswork. There are a lot of things that need to be done in preparation before you can run your first app on an Android device, especially if the app is written in C++ and has to use the NDK.

To test the Android version of the library I used the very same regression test suite I used for Linux and all other platforms. It is just wrapped in a very light Java shell without any graphical interface, and the stdout stream is routed to the Android logcat system log.

Even though the NDK documentation says it is possible to debug the native code, I unfortunately didn’t manage to figure out how to get it working. Luckily, I didn’t really need to do any debugging: once I got the app running, everything worked smoothly, since Android uses the exact same MIPS ABI as Linux.

About the author: Guest Blog
