Iddad Tech Blog


Tuesday, September 27, 2016

The A5-V11 router and the Ralink RT5350

For IoT work, besides the Arm architecture that reigns supreme in the embedded world, you can also use the MIPS architecture, which is quite common in routers.

As it happens, a small 3G router that recently drew attention for its low price - under €7 shipping included on eBay - uses a Ralink processor with a MIPS architecture.

A5-V11

Also known as the "A5-V11", this device is sold under the somewhat misleading label of "3G/4G router" but actually has no 3G capability of its own: you can only plug in a USB 3G modem and then share the connection over Wi-Fi or Ethernet. Versions with a built-in battery also exist for a few euros more.

Specifications

CPU: Ralink RT5350F @ 360 MHz
Flash: 4 MB
RAM: 32 MB
USB port: USB 2.0 host
Ethernet: 100 Mb/s
Wi-Fi: 802.11n

Power consumption

For reference, here are figures found on the OpenWrt forum:

Wi-Fi enabled, Ethernet disabled: 194 mA
Wi-Fi disabled, Ethernet disabled: 112 mA

The consumption is reasonable, but the RT5350 datasheet does not mention any particular power-saving mode. Comparing the consumption of two systems is difficult, but going by Adafruit's comparison, the A5-V11 sits somewhere between a Raspberry Pi A and an Arduino Yun.
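
These current figures make it easy to estimate how long a battery-powered version would last. A quick back-of-the-envelope sketch - the 2000 mAh capacity is a made-up example, not a measured value:

```python
# Rough runtime estimate from the current draw figures above.
# The 2000 mAh battery capacity is a hypothetical example value.

def runtime_hours(capacity_mah, draw_ma):
    """Ideal runtime in hours, ignoring regulator losses and discharge curves."""
    return capacity_mah / draw_ma

if __name__ == "__main__":
    for label, draw_ma in (("Wi-Fi on", 194), ("Wi-Fi off", 112)):
        print(f"{label}: {runtime_hours(2000, draw_ma):.1f} h")
```

With Wi-Fi enabled that gives roughly 10 hours, which matches what the battery-equipped variants seem to target.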

Operating system

The A5-V11 ships with a made-in-China firmware that does its router job well, but it can easily be reflashed with OpenWrt, a Linux distribution optimised for routers. Just upload an OpenWrt firmware image through the web interface and you are done! That said, you had better solder a serial header and keep a USB-FTDI adapter at hand, because it is easy to end up in a situation where you have to reflash a firmware through U-Boot.

Once OpenWrt is installed, you get a "real" Linux based on BusyBox, opkg and a fairly old 3.18 kernel, but with many backported drivers. With only 4 MB of storage you cannot ask for too much, especially since the package repository is quite well stocked.

Thanks to extroot, part of the OS can be stored on a USB stick, but you really have to pick kernel modules one by one: after dropping IPv6 support, PPP and the SSH server, I still did not have enough room to add the ext4 support required by extroot. You can, however, settle for FAT support to read Windows-formatted USB sticks and store your applications and data there.

Available I/O

On the stock router, you have access to the serial port (soldering required), 2 LEDs and one button.

A5-V11-serie

To turn the LEDs on:

root@(none):/# cd /sys/class/leds/a5-v11\:blue\:system/
root@(none):/sys/devices/gpio-leds/leds/a5-v11:blue:system# echo none > trigger
# turn the GPIO / LED on
root@(none):/sys/devices/gpio-leds/leds/a5-v11:blue:system# echo 1 > brightness
# turn the GPIO / LED off
root@(none):/sys/devices/gpio-leds/leds/a5-v11:blue:system# echo 0 > brightness

To read the GPIO associated with the reset button, just use the /etc/rc.button/reset script.
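
For scripting on the device, the same sysfs writes can be wrapped in a few lines of Python. This is only a sketch: the LED path comes from the session above, and it is passed in as a parameter so the logic can also be exercised off-device with a dummy directory.

```python
import os

def set_led(led_dir, on):
    """Drive a sysfs LED: disable any kernel trigger, then set brightness."""
    with open(os.path.join(led_dir, "trigger"), "w") as f:
        f.write("none\n")
    with open(os.path.join(led_dir, "brightness"), "w") as f:
        f.write("1\n" if on else "0\n")

if __name__ == "__main__":
    led = "/sys/class/leds/a5-v11:blue:system"  # path from the session above
    if os.path.isdir(led):  # only present on the router itself
        set_led(led, True)
```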

The Olimex and VoCore boards offer many more possibilities:

-2 relays (15 A/240 VAC)
-27 GPIOs (24 of them free)
-1 SPI port
-1 I2C port
-1 I2S port
-1 JTAG port

Performance

Since sysbench is not available in the OpenWrt repository, I settled for OpenSSL's built-in benchmark:

SSL Benchmark

Compared with the A5-V11, the Raspberry Pi B+ is 2 to 3 times faster and has much more memory.

As for the Raspberry Pi 3, it is way ahead, especially considering that this test is single-threaded and the Raspberry Pi 3 has 4 cores versus only 1 for the Ralink RT5350. Since the RT5350 dates back to 2010 and costs far less, they are not playing in the same league.

As for boot speed, with almost 30 seconds for booting plus establishing the Wi-Fi connection, the A5-V11 starts rather slowly. It should be possible to speed things up a bit by removing some of the IP routing and firewall modules included by default in OpenWrt.

Conclusion

Can this router be repurposed for IoT? Yes, but the weak point is storage: 4 MB is meagre for a Linux system, especially given how JFFS2 behaves... You are better off with the dedicated Olimex or VoCore boards, which have 8 MB of storage - enough breathing room to include all the necessary drivers without sacrificing the USB port or the SPI bus for an SD card.

The appeal of RT5350-based solutions lies in their low price and their OpenWrt support. For instance, if you need to connect an industrial machine or a medical device to the network over Ethernet/Wi-Fi while also exposing a USB port for importing/exporting files, it is an almost turnkey solution.

On the other hand, for connected-sensor use cases that require minimal power consumption and long battery life, you are better off choosing another architecture.

Friday, May 29, 2015

VPN, X509 extension, and weird SSL behaviour

Recently, I ran into some strange behaviour when using Twitter over my VPN:
the front page did not display properly through the VPN but looked normal from another computer without it, as if the CSS could not be loaded.
So, out of curiosity, I fetched the certificate from the PC with the VPN and from the PC without it - and their MD5 sums did not match!

# the SSL cert I see from my DSL internet connection
:~$ md5sum goodtwitter.com.cert
4b1f9f49b74ac18fa20b32fd0f570aa9  goodtwitter.com.cert
# the SSL cert I see from my VPN
:~$ md5sum badtwitter.com.cert
 e8ed041e9751a8bf84e217037239ef08  badtwitter.com.cert

Even worse, the HTML source code of the two pages did not match either! Was I the victim of a MITM attack?
After a quick check, the differences in the page turned out to be language-related; the ad source also differs, which makes sense since my VPN's IP is associated with another country.
One problem solved.

While checking the suspicious Twitter certificate, one of the fields looked bogus when displayed by OpenSSL:

openssl x509 -in badtwitter.com.cert
[...]
           1.3.6.1.4.1.11129.2.4.2: 
                ...k.i.v.......X......gp 
.....K..+,.....G0E.!............"p.....`v!.+MgT..f..H. .?.1.).C...t>c. |%...5 
.....+@...w.V.../.......D.>.Fv....\....U.......K..-?.....H0F.!...4..... ..X.K.....D.e......._h..!..F..T.w.~/N.*J.w&.#.q....... ....v.h....d..:...(.L.qQ]g..D. 
g..OO.....K..+H.....G0E.!.....1....W.9~....GS.W.....^...C.. ^4.M&9$.~.."Sd^.p4..r....'..;... 
[...]

This line only appeared in the certificate seen from the VPN PC. It was really starting to look fishy and raised some questions:

-Does Twitter really have several certificates?
-Does using a VPN change anything when you access HTTPS websites, a bit like a misconfigured HTTPS proxy triggering warnings in your browser?
-Was I the victim of some kind of man-in-the-middle attack?
-And if this is a MITM, why doesn't Firefox warn me?

After some googling, the bogus part of the certificate turned out to be the embedded SCT:

An embedded SCT (Signed Certificate Timestamp) is a new certificate extension/OID (1.3.6.1.4.1.11129.2.4.2) used to implement Certificate Transparency. According to RFC 6962, an SCT is a signed proof that the certificate has been submitted to a public log, which makes it possible to detect and invalidate certificates produced by a rogue CA.
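
A certificate can be checked for this extension simply by scanning the text dump produced by `openssl x509 -text`. A minimal sketch - the decoded name "CT Precertificate SCTs" is what newer OpenSSL versions print, while older ones fall back to the raw OID:

```python
# Look for the embedded-SCT extension in an `openssl x509 -text` dump.
SCT_OID = "1.3.6.1.4.1.11129.2.4.2"

def has_embedded_sct(x509_text):
    """True if the dump mentions the SCT OID or its decoded name."""
    return SCT_OID in x509_text or "CT Precertificate SCTs" in x509_text

if __name__ == "__main__":
    sample = "X509v3 extensions:\n    1.3.6.1.4.1.11129.2.4.2:\n        ..blob.."
    print(has_embedded_sct(sample))  # True
```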

This makes more sense than a global MITM attack on my VPN provider or a very targeted attack on my PC.

So I compiled an up-to-date version of GnuTLS to see whether it recognises this X.509 v3 extension.
The answer is no; I got the following output:

                Unknown extension 1.3.6.1.4.1.11129.2.4.2 (not critical): 
                        ASCII: ...k.i.v.......X......gp.<5.......w.........K..+,.....G0E.!............"p.....`v!.+MgT..f..H. .?.1.).C...t>c. |%...5......+@...w.V.../.......D.>.Fv....\....U.......K..-?.....H0F.!...4..... ..X.K.....D.e......._h..!..F..T.w.~/N.*J.w&.#.q....... ....v.h....d..:...(.L.qQ]g..D..g..OO.....K..+H.....G0E.!.....1....W.9~....GS.W.....^...C.. ^4.M&9$.~.."Sd^.p4..r....'..;...

So, how do I verify the certificate signature now?
On my main PC, GnuTLS tells me the certificate can be trusted:

:~$ gnutls-cli  -p 443 www.twitter.com
Processed 172 CA certificate(s).
Resolving 'www.twitter.com'...
Connecting to '199.59.148.10:443'...
- Certificate type: X.509
- Got a certificate list of 2 certificates.
- Certificate[0] info:
 - subject `jurisdictionOfIncorporationCountryName=US,jurisdictionOfIncorporationStateOrProvinceName=Delaware,businessCategory=Private Organization,serialNumber=4337446,C=US,postalCode=94103-1307,ST=California,L=San Francisco,street=1355 Market St,O=Twitter\, Inc.,OU=Twitter Security,CN=twitter.com', issuer `C=US,O=Symantec Corporation,OU=Symantec Trust Network,CN=Symantec Class 3 EV SSL CA - G3', RSA key 2048 bits, signed using RSA-SHA256, activated `2014-09-10 00:00:00 UTC', expires `2016-05-09 23:59:59 UTC', SHA-1 fingerprint `add53f6680fe66e383cbac3e60922e3b4c412bed'
        Public Key ID:
                269a19a38828c1dd701ba0ca2c98dbc6e14f373e
        Public key's random art:
                +--[ RSA 2048]----+
                |   .             |
                |  . .            |
                | . . o           |
                |*.. + o          |
                |*+..oo. S        |
                |+B o * o         |
                |* * = o          |
                |.. o oE.         |
                |    . ..         |
                +-----------------+

- Certificate[1] info:
 - subject `C=US,O=Symantec Corporation,OU=Symantec Trust Network,CN=Symantec Class 3 EV SSL CA - G3', issuer `C=US,O=VeriSign\, Inc.,OU=VeriSign Trust Network,OU=(c) 2006 VeriSign\, Inc. - For authorized use only,CN=VeriSign Class 3 Public Primary Certification Authority - G5', RSA key 2048 bits, signed using RSA-SHA256, activated `2013-10-31 00:00:00 UTC', expires `2023-10-30 23:59:59 UTC', SHA-1 fingerprint `e3fc0ad84f2f5a83ed6f86f567f8b14b40dcbf12'
- Status: The certificate is trusted. 
[...]

But on my VPN PC, gnutls-cli tells me that www.twitter.com cannot be trusted!

:~# gnutls-cli -p 443 www.twitter.com
Resolving 'www.twitter.com'...
Connecting to '199.59.148.10:443'...
- Certificate type: X.509
 - Got a certificate list of 2 certificates.
 - Certificate[0] info:
  - subject `jurisdictionOfIncorporationCountryName=US,jurisdictionOfIncorporationStateOrProvinceName=Delaware,businessCategory=Private Organization,serialNumber=4337446,C=US,postalCode=94103-1307,ST=California,L=San Francisco,STREET=1355 Market St,O=Twitter\, Inc.,OU=Twitter Security,CN=twitter.com', issuer `C=US,O=Symantec Corporation,OU=Symantec Trust Network,CN=Symantec Class 3 EV SSL CA - G3', RSA key 2048 bits, signed using RSA-SHA256, activated `2014-09-10 00:00:00 UTC', expires `2016-05-09 23:59:59 UTC', SHA-1 fingerprint `add53f6680fe66e383cbac3e60922e3b4c412bed'
 - Certificate[1] info:
  - subject `C=US,O=Symantec Corporation,OU=Symantec Trust Network,CN=Symantec Class 3 EV SSL CA - G3', issuer `C=US,O=VeriSign\, Inc.,OU=VeriSign Trust Network,OU=(c) 2006 VeriSign\, Inc. - For authorized use only,CN=VeriSign Class 3 Public Primary Certification Authority - G5', RSA key 2048 bits, signed using RSA-SHA256, activated `2013-10-31 00:00:00 UTC', expires `2023-10-30 23:59:59 UTC', SHA-1 fingerprint `e3fc0ad84f2f5a83ed6f86f567f8b14b40dcbf12'
- The hostname in the certificate matches 'www.twitter.com'.
- Peer's certificate issuer is unknown
- Peer's certificate is NOT trusted

Just to be sure, I tried with the version of GnuTLS I had just compiled on my other PC, and this time it worked!
I don't think of myself as a person worth spying on, but if I did, these technical glitches would have me very worried.
Moral of the story:
keep calm and update your system

Q: Does Twitter really have several certificates?
A: Yes.
Q: Does using a VPN change anything when you access HTTPS websites, a bit like using an HTTPS proxy?
A: No. The VPN shouldn't mess with HTTPS at all.
Q: Was I the victim of some kind of MITM attack?
A: No.
Q: And if it were one, why doesn't Firefox give me any kind of warning?
A: Because Firefox probably understands this X.509 extension.
Q: Why don't I have the same certificate in both cases?
A: I don't know; this still puzzles me.
Q: Why doesn't the page display properly on my Kali Linux when I use the VPN?
A: No answer yet.

Thursday, March 7, 2013

Android WebRTC support

One of our projects involves video conferencing and we wanted to use WebRTC to easily support a wide variety of platforms.
WebRTC defines a set of 3 HTML5 APIs: getUserMedia, RTCDataChannel and RTCPeerConnection.
The first step was to evaluate how well WebRTC was supported on Android (on Nexus 4 to be specific), so we first looked at the main HTML5 browsers available on Google's OS:

  • Firefox 20.0 Beta
  • Chrome 18.0
  • Opera Mobile 12.10
  • Bowser 0.1.4

After searching through the forums, it appears that neither Chrome nor Firefox supports WebRTC in its Android version.
In both cases it's “in progress”, issues can be tracked here and there.
WebRTC.org mentions Opera, but nothing regarding its Mobile version.
Bowser is supposed to handle WebRTC properly since it was developed by Ericsson for this purpose.

We tried several online demos on Android and on Linux to get a better picture of the current WebRTC support status:

Shiny Demos

These demos were written by the Opera software team as a showcase of Opera's HTML5 support.
The getUserMedia section of the website only tests that API; it doesn't exercise the rest of WebRTC, which also includes the RTCDataChannel and RTCPeerConnection APIs.
The Explode demo doesn't work on Firefox Nightly 2013-03-05 but Warholiser does. Let's hope it will be ok with the next stable Firefox release.

Simpl.info getUserMedia test

This is also a test that focuses on the getUserMedia API. By default it only works with the Linux version of Chrome, not with Firefox or Opera. After modifying the code so that it calls getUserMedia() the same way ShinyDemos does, the demo worked on all 3 desktop browsers and on Opera Mobile as well.
This test also highlights the fact that Opera only supports camera access, not microphone access.

Ericsson demo

The Ericsson demo doesn't even work with Chrome and Firefox, the 2 leading desktop WebRTC browsers, which is quite worrying. Ericsson's browser, Bowser, only supports the H.264 codec and not the VP8 codec that Firefox and Chrome support, so it wouldn't be able to communicate with them.
Furthermore, according to the developer comments on the blog, the "websockets" part (I think this means RTCPeerConnection) is not implemented in the Android version, so running Bowser on Android or iOS wouldn't work either.
The only configuration where the Ericsson browser could work would be with 2 iOS devices, and I didn't have the opportunity to test this.

Google AppRTC

This was the key application I wanted to get working, since it allows us to do some actual videoconferencing. Unfortunately it only works with the desktop versions of Chrome and Firefox Nightly.
After looking at the code I noticed the demo only handles 2 cases: Chrome and Firefox. So accessing it with Opera just results in the error “GetUserMedia failed. Is this a WebRTC capable browser?”
I tried to tweak the demo so it would run with Opera but hit a wall: Opera doesn't support the RTCPeerConnection API, so there is no way to get it working. The Android version of Bowser would be rejected on the same grounds.

Conclusion

Here is a summary of the tests done on Linux (with Chrome 25, Firefox Nightly 2013-03-05 and Opera 12.14) and Android:

webrtc-array

As of March 2013, there is no viable WebRTC videoconferencing solution on Android.
Bowser is an experimental browser and can't be used in a commercial environment.
Opera Mobile already partially supports getUserMedia for camera access but not for the microphone, and as RTCPeerConnection is not supported at all, Opera can't be used for videoconferencing. Note that the current Opera Mobile beta (14.0.1025.52315) doesn't support getUserMedia at all; if you want to experiment with this API, stay on the stable 12.10.
Google and Mozilla both have plans to support WebRTC, so we'll just have to wait...

14/03/13 Edit - Chrome for Android now supports WebRTC in its latest beta!

Friday, February 22, 2013

GanttProject Python companion script

I started using GanttProject for the planning part of one of my projects and at first I was quite happy with it. But the tool is mostly designed to draw Gantt diagrams and lacks some features that would make it really useful for project planning - something I only noticed after creating a reasonably sized diagram.

So instead of exporting my project to another tool or starting again from scratch, I decided to take advantage of the fact that the GanttProject file format is XML-based and wrote a script that calculates the cost of a project - the main feature I was missing in this tool.

Here are the steps to follow:

  • First add a custom column named 'price' to the resources, set its type to integer and give it a default value.

ganttproject-custom-fields-manager

  • Then enter a price for each of your resources.

ganttproject-resource-prices

  • Save your project and execute the script with the project file as its only argument:


On Linux / Mac OS X:

./cost_calc.py dummy_plan6.gan 
price attribute-id is tpc1
default resource price is 400
Warning: can't find price property for resource
resource 1 is coder B and costs 400
resource 0 is Admin A and costs 500.0
resource 3 is tester D and costs 300.0
resource 2 is GFX guy C and costs 450.0

resource <coder B> number of day=21.0 cost=8400.0
resource <Admin A> number of day=5.0 cost=2500.0
resource <tester D> number of day=15.5 cost=4650.0
resource <GFX guy C> number of day=5.5 cost=2475.0

total number of days = 47.0
total_cost = 18025.0
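
The core of the calculation fits in a few lines. The sketch below runs on a simplified, embedded .gan-style sample; the tag and attribute names (`custom-property`, `definition-id`, `allocation`, `load`, `duration`) reflect my reading of the format and may need adjusting against a file saved by your own GanttProject version.

```python
# Sketch of the cost calculation on a simplified .gan-style XML sample.
# Element/attribute names are assumptions about the format.
import xml.etree.ElementTree as ET

SAMPLE = """<project>
  <resources>
    <resource id="0" name="Admin A">
      <custom-property definition-id="tpc1" value="500"/>
    </resource>
    <resource id="1" name="coder B">
      <custom-property definition-id="tpc1" value="400"/>
    </resource>
  </resources>
  <allocations>
    <allocation task-id="10" resource-id="0" load="100.0"/>
    <allocation task-id="11" resource-id="1" load="100.0"/>
  </allocations>
  <tasks>
    <task id="10" duration="5"/>
    <task id="11" duration="21"/>
  </tasks>
</project>"""

def project_cost(xml_text, price_attr="tpc1"):
    root = ET.fromstring(xml_text)
    # daily price per resource id, taken from the custom 'price' column
    prices = {r.get("id"): float(p.get("value"))
              for r in root.iter("resource")
              for p in r.iter("custom-property")
              if p.get("definition-id") == price_attr}
    # task duration in days, per task id
    days = {t.get("id"): float(t.get("duration")) for t in root.iter("task")}
    # cost = sum over allocations of days * load% * daily price
    return sum(days[a.get("task-id")] * float(a.get("load")) / 100.0
               * prices[a.get("resource-id")]
               for a in root.iter("allocation"))

if __name__ == "__main__":
    print(project_cost(SAMPLE))  # 5*500 + 21*400 = 10900.0
```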


On Windows:
Windows users will probably need to download and install Python 2.7. Then either write a .bat script like this one:

c:\python27\python.exe cost_calc.py %1

or start the command manually: cost-calc-windows-screenshot.png

I can't guarantee this script will work in all cases, but so far it works for my Gantt project. I hope it will be useful for you as well!

You can download the script here. Feel free to improve it.

Monday, July 25, 2011

[QtCreator] Reclaim the power of your multi-core CPU!

So you have just bought a new PC with a multi-core CPU to replace your aging single-core machine, and you expect to see a huge improvement when you compile a Qt project. You are going to be disappointed...
By default, QtCreator will only use one core at a time; you need to specify the number of cores to use manually!
To configure it properly, go to the Projects tab on the left and add -j <number of cores> to your project's make arguments, as illustrated in the following picture (excuse my French!)

QtCreator multicore configuration

This solution is also mentioned somewhere on Qt's website, but I think most people don't know about it.

The -j parameter of make specifies the number of jobs to run simultaneously. It is supposed to match the number of available cores on the system. Some say it should be <number of cores + 1> to cope with I/O delays, but I didn't observe any improvement by doing so.
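
Rather than hardcoding the value, the core count can be queried at run time. A tiny sketch to print the matching make invocation:

```python
# Suggest a -j value from the number of cores reported by the OS.
import os

cores = os.cpu_count() or 1  # cpu_count() can return None on exotic systems
print(f"make -j {cores}")      # one job per core
print(f"make -j {cores + 1}")  # the debated "cores + 1" variant
```

From a shell on Linux, `make -j "$(nproc)"` achieves the same thing.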

When compiling the qtdemo example we can clearly see a difference in terms of CPU usage in the system monitor :

Without the -j parameter, most of the time there is only a single core used at a time:

qtdemo compiled without the -j parameter

But with the -j parameter, all the cores are used at 100% at the same time and the compilation time is much shorter:

qtdemo compiled with the -j parameter

Measuring manually with the time command, compiling qtdemo takes 29 seconds on a single core and only 9 seconds on 4 cores. I would have expected a 4x performance improvement, but 3.2x faster is quite nice - not everything scales perfectly in the real world.
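
In parallel-computing terms, those timings correspond to the following speedup and efficiency figures:

```python
# Speedup and parallel efficiency for the qtdemo timings above.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

s = speedup(29, 9)   # 29 s on 1 core, 9 s on 4 cores
efficiency = s / 4   # fraction of the ideal 4x
print(f"speedup {s:.1f}x, efficiency {efficiency:.0%}")  # speedup 3.2x, efficiency 81%
```

The remaining ~19% is lost to serial phases such as linking, plus I/O and scheduling overhead.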

Thursday, June 23, 2011

SSD performance for C/C++ development

Solid-state disks may seem like a luxury reserved for high-end or noise-free PCs, but can they improve your productivity? Let's find out!

Test configuration:

  • - Intel Core i7 2500K
  • - Asus P8P67 Rev 3.0
  • - 4 GB PC12800 RAM
  • - SATA3 Western Digital Caviar Blue 1 TB hard disk
  • - SATA3 Crucial M4 64 GB SSD
  • - Ubuntu 11.04 64-bit


The 2 disks are partitioned roughly the same:

  • - a 250 MB /boot partition
  • - an 8 GB swap partition
  • - a 50+ GB ext4 root partition

Apparently, the way you partition your disk can affect performance, so just in case I followed the instructions I found here.
I copied the root partition from the Western Digital HDD to the Crucial SSD and updated /etc/fstab and GRUB. This way I'm sure I have exactly the same operating system on both disks.
I also added the discard option to the SSD entries in /etc/fstab to make sure TRIM happens correctly.

Raw performance

With the hdparm -tT command, we can measure the absolute speed of each disk:

Western Digital HDD

Timing cached reads: 11092.48 MB/sec
Timing buffered disk reads: 129.09 MB/sec

Crucial SSD

Timing cached reads: 23146 MB in 2.00 seconds = 11584.70 MB/sec
Timing buffered disk reads: 1046 MB in 3.00 seconds = 348.37 MB/sec

So the SSD is nearly 3 times as fast as the HDD! That's a good start - what does it change for real-life applications?
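
The "nearly 3 times" figure comes straight from the buffered disk reads:

```python
# Sequential-read ratio from the hdparm figures above.
ssd_mb_s, hdd_mb_s = 348.37, 129.09
print(f"SSD / HDD: {ssd_mb_s / hdd_mb_s:.1f}x")  # SSD / HDD: 2.7x
```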

Boot time

The first obvious performance impact should be boot time. To get accurate results, we will use bootchart, which records the loading and execution times of the various processes involved in the boot sequence. It draws nice graphs and is a bit more reliable than simply timing the boot by hand.

Booting on the HDD

HDD boot time

Booting on the SSD

SSD boot time

Bootchart clearly shows that with a regular hard disk, the system spends a lot of time waiting for I/O.
The reported boot time is 8.21 seconds with the SSD versus 21.26 seconds with the HDD! In practice, it takes less than 10 seconds between the moment I press Enter in GRUB and the moment GNOME is up and running - less than the BIOS takes to start GRUB from a cold boot!

Compilation

Booting quickly is nice, but many people only boot their machine once a day. Let's take the case of a C/C++ developer and see how an SSD improves compilation time.
I used the time command to measure compilation, and the tests were launched right after a boot to minimise any involuntary caching. Compilation was set up to use 4 simultaneous jobs (make -j 4). The latest kernel (2.6.39.1) and the Qt examples (4.7) were used for this test.

SSD impact on compilation time

The performance boost is not as obvious as for boot time, but still, your code should compile about 10% faster on an SSD.
Although it is not in the graphic, I also ran some tests with Git: adding the kernel source to a repository and committing all the files. Maybe this task was not intensive enough, because I couldn't see any improvement when using the SSD rather than the HDD.

Conclusion

So an SSD can give you a good 10% performance boost when compiling C/C++ projects and can make your machine boot twice as fast.
SSDs are still expensive, but if you already have the best CPU your motherboard can take, they are the easiest way to make your PC feel more responsive.
Also, SSDs are supposed to be good at many simultaneous random accesses, which is exactly what happens when several cores work on different tasks. This test was done with a 4-core CPU; if it were repeated on a Core i7 2600K (4 cores / 8 threads), I would expect the SSD to bring an even bigger performance boost.