Older blog entries for LaForge (starting at number 321)

Invited keynote + TTCN-3 talk at netdevconf 2.2 in Seoul

It came as a big surprise that I was recently invited to give a keynote on netfilter history at netdevconf 2.2.

First of all, I wouldn't have expected netfilter to be that relevant next to all the other [core] networking topics at netdevconf. Secondly, I've not been doing any work on netfilter for about a decade now, so my memory is a bit rusty by now ;)

Speaking of Rusty: timing-wise, there's a nice coincidence in that I'll be able to meet up with him in Berlin later this month, so hopefully we can spend some time reminiscing about old times and see what kind of useful input he has for the keynote.

I'm also asking my former colleagues and successors in the netfilter project to share with me any noteworthy events or anecdotes, particularly covering the time after my retirement from the core team. So if you have something that you believe shouldn't be missing from a keynote on netfilter project history: please reach out to me by e-mail ASAP and let me know about it.

To try to fend off the elder[ly] statesmen image that goes along with being invited to give keynotes about the history of projects you were working on a long time ago, I also submitted an actual technical talk: TTCN-3 and Eclipse Titan for testing protocol stacks, in which I'll cover my recent journey into TTCN-3 and TITAN land, and how I think those tools can help us in the Linux [kernel] networking community to productively produce tests for the various protocols.

As usual for netdevconf, there are plenty of other exciting talks in the schedule.

I'm very much looking forward to both visiting Seoul again, as well as meeting lots of the excellent people involved in the Linux networking subsystems. See ya!

Syndicated 2017-10-09 22:00:00 from LaForge's home page

Ten years Openmoko Neo1973 release anniversary dinner

As I noted earlier this year, 2017 marks the tenth anniversary of shipping the first Openmoko phone, the Neo1973.

On this occasion, a number of the key people managed to gather for an anniversary dinner in Taipei. Thanks to everyone who could make it; it was very good to see them all together again. Sadly, far from everyone could attend. You have been missed!

The award for the most crazy attendee of the meeting goes out to my friend Milosch, who has actually flown from his home in the UK to Taiwan, only to meet up with old friends and attend the anniversary dinner.

You can see some pictures in Milosch's related tweet.

Syndicated 2017-10-08 22:00:00 from LaForge's home page

On Vacation

In case you're wondering about the lack of activity not only on this blog but also in git repositories, mailing lists and the like: I've been on vacation since September 13. It's my usual "one month in Taiwan" routine, during which I spend some time in Taipei, but also take several long motorbike tours around mostly rural Taiwan.

You can find the occasional snapshot in my twitter feed, such as the pictures here and there.

Syndicated 2017-10-04 22:00:00 from LaForge's home page

Purism Librem 5 campaign

There's a new project currently undergoing crowd funding that might be of interest to the former Openmoko community: The Purism Librem 5 campaign.

Similar to Openmoko a decade ago, they are aiming to build a FOSS based smartphone built on GNU/Linux without any proprietary drivers/blobs on the application processor, from bootloader to userspace.

Furthermore (just like Openmoko) the baseband processor is fully isolated, with no shared memory and with the Linux-running application processor being in full control.

They go beyond what we wanted to do at Openmoko in offering hardware kill switches for camera/phone/baseband/bluetooth. During Openmoko days we assumed it was sufficient to simply control all those bits from the trusted Linux domain, but of course once that domain is compromised, a physical kill switch provides a completely different level of security.

I wish them all the best, and hope they can leave a better track record than Openmoko. Sure, we sold some thousands of phones, but the company quickly died, and the state of software was far from end-user-ready. I think the primary obstacles/complexities are verification of the hardware design as well as the software stack all the way up to the UI.

The budget of ~ 1.5 million seems extremely tight from my point of view, but then I have no information about how much Puri.sm is able to invest from other sources outside of the campaign.

If you're a FOSS developer with a strong interest in a Free/Open privacy-first smartphone, please note that they have several job openings, from Kernel Developer to OS Developer to UI Developer. I'd love to see some talents at work in that area.

It's a bit of a pity that almost all of the actual technical details are unspecified at this point (except RAM/flash/main-cpu). No details on the cellular modem/chipset used, no details on the camera, nor on the bluetooth or wifi chipsets, etc. This might be an indication of the early stage of their planning. I would have expected those questions to be ironed out before looking for funding - but then, it's their campaign and they can run it as they see fit!

I for my part have just put in a pledge for one phone. Let's see what will come of it. In case you feel motivated by this post to join in: Please keep in mind that any crowdfunding campaign bears significant financial risks. So please make sure you made up your mind and don't blame my blog post for luring you into spending money :)

Syndicated 2017-09-02 22:00:00 from LaForge's home page

The sad state of voice support in cellular modems

Cellular modems have existed for decades and come in many shapes and kinds. They contain the cellular baseband processor, RF frontend, protocol stack software and anything else required to communicate with a cellular network. Basically a phone without display or input.

During the last decade or so, the vast majority of cellular modems come as LGA modules, i.e. a small PCB with all components on the top side (and a shielding can), which has contact pads on the bottom so you can solder it onto your mainboard. You can obtain them from vendors such as Sierra Wireless, u-blox, Quectel, ZTE, Huawei, Telit, Gemalto, and many others.

In most cases, the vendors now also solder those modules to small adapter boards to offer the same product in mPCIe form-factor. Other modems are directly manufactured in mPCIe or NGFF aka m.2 form-factor.

As long as those modems were still 2G / 2.5G / 2.75G, the main interconnection with the host (often some embedded system) was a serial UART. The audio input/output for voice calls was made available as analog signals, ready to connect a microphone and speaker, as that's what the cellular chipsets were designed for in smartphones. In the Openmoko phones we also interfaced the audio of the cellular modem in analog, for exactly that reason.

From 3G onwards, the primary interface towards the host is now USB, with the modem running as a USB device. If your laptop contains a cellular modem, you will see it show up in the lsusb output.

From that point onwards, it would have made a lot of sense to simply expose the audio via USB as well: offer a multi-function USB device that combines the usual virtual serial ports for AT commands and network devices for IP with a USB Audio function. It would simply show up as a "USB sound card" to the host, with all standard drivers working as expected. Sadly, nobody seems to have implemented this, at least not in a supported production version of their product.

Instead, what some modem vendors have implemented as an ugly hack is the transport of 8kHz 16bit PCM samples over one of the UARTs. See for example the Quectel UC-20 or the Simcom SIM7100 which implement such a method.

All the others largely ignore any software access to the audio stream. One wonders why that is; from a software and systems architecture perspective it would be super easy. Instead, what most vendors do is expose a digital PCM interface. This is suboptimal in many ways:

  • there is no mPCIe standard on which pins PCM should be exposed
  • no standard product (like laptop, router, ...) with mPCIe slot will have anything connected to those PCM pins

Furthermore, each manufacturer / modem seems to support a different subset or dialect of the PCM interface in terms of:

  • voltage (almost all of them are 1.8V, while mPCIe signals normally are 3.3V logic level)
  • master/slave (almost all of them insist on being a clock master)
  • sample format (alaw/ulaw/linear)
  • clock/bit rate (mostly 2.048 MHz, but can be as low as 128kHz)
  • frame sync (mostly short frame sync that ends before the first bit of the sample)
  • endianness (mostly MSB first)
  • clock phase (mostly change signals at rising edge; sample at falling edge)
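To give an idea of how many knobs are involved, the dialect parameters above can be collected into a small configuration record; a sketch in Python (all names and defaults are illustrative, not any vendor's actual interface):

```python
from dataclasses import dataclass

@dataclass
class PcmConfig:
    """Parameters that vary between modem vendors' PCM dialects."""
    io_voltage: float = 1.8        # logic level in volts (mPCIe is normally 3.3 V)
    clock_master: bool = True      # almost all modems insist on being clock master
    sample_format: str = "linear"  # "alaw", "ulaw" or "linear"
    clock_hz: int = 2_048_000      # bit clock; can be as low as 128 kHz
    short_frame_sync: bool = True  # sync pulse ends before the first data bit
    msb_first: bool = True
    drive_on_rising_edge: bool = True  # sample on the falling edge

    def bits_per_frame(self, sample_rate_hz: int = 8000) -> int:
        """Bit-clock cycles available per frame-sync period."""
        return self.clock_hz // sample_rate_hz

cfg = PcmConfig()
# At a 2.048 MHz bit clock and 8 kHz frame sync there are 256 bit slots
# per frame, of which a 16-bit sample occupies only the first 16.
print(cfg.bits_per_frame())  # 256
```

Every one of those fields would need to match between modem and host for audio to work, which illustrates why a generic mainboard cannot simply support "PCM".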

It's a real nightmare, when it could be so simple. If they implemented USB-Audio, you could plug a cellular modem into any board with a mPCIe slot and it would simply work. As they don't, you need a specially designed mainboard that implements exactly the specific dialect/version of PCM of the given modem.

By the way, the most "amazing" vendor seems to be u-blox. Their modems support PCM audio, but only in the solder-type version: they simply didn't route those signals to the mPCIe connector, making audio impossible to use with a connectorized modem. How inconvenient.

Summary

If you want to access the audio signals of a cellular modem from software, then you either

  • have standard hardware, pick one very specific modem model, and hope it remains available for the lifetime of your application, or
  • build your own hardware implementing a PCM slave interface and then pick + choose your cellular modem

On the Osmocom mpcie-breakout board and the sysmocom QMOD board we have exposed the PCM-related pins on 2.54mm headers to allow a separate board to pick up that PCM and offer it to the host system. However, such a separate board hasn't been developed so far.

Syndicated 2017-09-01 22:00:00 from LaForge's home page

First actual XMOS / XCORE project

For many years I've been fascinated by the XMOS XCore architecture. It offers a surprisingly refreshing alternative to virtually any other classic microcontroller architecture out there. However, despite reading a lot about it years ago, being fascinated by it, and even giving a short informal presentation about it once, I've so far never used it. Too much "real" work imposes a high barrier to spending time learning about new architectures, languages, toolchains and the like.

Introduction into XCore

Rather than having lots of fixed-purpose built-in "hard core" peripherals for interfaces such as SPI, I2C, I2S, etc. the XCore controllers have a combination of

  • I/O ports for 1/4/8/16/32 bit wide signals, with SERDES, FIFO, hardware strobe generation, etc
  • Clock blocks for using/dividing internal or external clocks
  • hardware multi-threading that presents 8 logical threads on each core
  • xCONNECT links that can be used to connect multiple processors over 2 or 5 wires per direction
  • channels as a means of communication (similar to sockets) between threads, whether on the same xCORE or a remote core via xCONNECT
  • an extended C (xC) programming language to make use of parallelism, channels and the I/O ports

In spirit, it is like a 21st century implementation of some of the concepts established first with Transputers.

My main interest in XMOS has been the flexibility you get in implementing not-so-standard electronics interfaces. For regular I2C, UART, SPI, etc., there is of course no such need. But every so often one encounters some interface that's very rarely found (like the output of an E1/T1 Line Interface Unit).

Also, quite often I run into use cases where it's simply impossible to find a microcontroller with a sufficient number of the related peripherals built-in. Try finding a microcontroller with 8 UARTs, for example. Or one with four different PCM/I2S interfaces, which all can run in different clock domains.

The existing options of solving such problems basically boil down to either implementing it in hard-wired logic (unrealistic, complex, expensive) or going to programmable logic with CPLD or FPGAs. While the latter is certainly also quite interesting, the learning curve is steep, the tools anything but easy to use and the synthesising time (and thus development cycles) long. Furthermore, your board design will be more complex as you have that FPGA/CPLD and a microcontroller, need to interface the two, etc (yes, in high-end use cases there's the Zynq, but I'm thinking of several orders of magnitude less complex designs).

Of course one can also take a "pure software" approach and go for high-speed bit-banging. There are some ARM SoCs that can toggle their pins. People have reported rates like 14 MHz being possible on a Raspberry Pi. However, when running a general-purpose OS in parallel, this kind of speed is hard to do reliably over long term, and the related software implementations are going to be anything but nice to write.

So the XCore looks like a nice alternative for a lot of those use cases: where you want a microcontroller with more programmability in terms of its I/O capabilities, but don't want to go as far as full-on FPGA/CPLD development in Verilog or VHDL.

My current use case

My current use case is to implement a board that can accept four independent PCM inputs (all in slave mode, i.e. clock provided by external master) and present them via USB to a host PC. The final goal is to have a board that can be combined with the sysmoQMOD and which can interface the PCM audio of four cellular modems concurrently.

While XMOS is quite strong in the Audio field and you can find existing examples and app notes for I2S and S/PDIF, I couldn't find any existing code for a PCM slave of the given requirements (short frame sync, 8kHz sample rate, 16bit samples, 2.048 MHz bit clock, MSB first).

I wanted to get a feeling how well one can implement the related PCM slave. In order to test the slave, I decided to develop the matching PCM master and run the two against each other. Despite having never written any code for XMOS before, nor having used any of the toolchain, I was able to implement the PCM master and PCM slave within something like ~6 hours, including simulation and verification. Sure, one can certainly do that in much less time, but only once you're familiar with the tools, programming environment, language, etc. I think it's not bad.

The biggest problem was that the clock phase for a clocked output port cannot be configured, i.e. the XCore insists on always clocking out a new bit at the falling edge, while my use case of course required the opposite: clocking out new signals at the rising edge. I had to use a second clock block to generate the inverted clock in order to achieve that goal.

Beyond that 4xPCM use case, I also have other ideas like finally putting the osmo-e1-xcvr to use by combining it with an XMOS device to build a portable E1-to-USB adapter. I have no clue if and when I'll find time for that, but if somebody wants to join in: Let me know!

The good parts

Documentation excellent

I found the various pieces of documentation extremely useful and very well written.

Fast progress

I was able to make fast progress in solving the first task using the XMOS / Xcore approach.

Soft Cores developed in public, with commit log

You can find plenty of soft cores that XMOS has been developing on github at https://github.com/xcore, including the full commit history.

This type of development is a big improvement over what most vendors of smaller microcontrollers like Atmel are doing (infrequent tar-ball code-drops without commit history). And in the case of the classic uC vendors, we're talking about drivers only. In the XMOS case it's about the entire logic of the peripheral!

You can for example see that for their I2C core, the very active commit history goes back to January 2011.

xSIM simulation extremely helpful

The xTIMEcomposer IDE (based on Eclipse) contains extensive tracing support and an extensible near cycle accurate simulator (xSIM). I've implemented a PCM master and PCM slave in xC and was able to simulate the program while looking at the waveforms of the logic signals between those two.

The bad parts

Unfortunately, my extremely enthusiastic reception of XMOS has suffered quite a bit over time. Let me explain why:

Hard to get XCore chips

While the product portfolio on the XMOS website looks extremely comprehensive, the vast majority of the parts are not available from stock at distributors. You won't even get samples, and lead times are 12 weeks (!). If you check at digikey, they list a total of 302 different XMOS controllers, but only 35 of them are in stock, of which only 15 are USB-capable. With other distributors like Farnell it's even worse.

I've seen this with other semiconductor vendors before, but never to such a large extent. Sure, some packages/configurations are not standard products, but having only 11% of the portfolio actually available is pretty bad.

In such situations, where it's difficult to convince distributors to stock parts, it would be a good idea for XMOS to stock parts themselves and provide samples / low quantities directly. Not everyone is able to order large trays and/or willing to wait 12 weeks, especially during the R&D phase of a board.

Extremely limited number of single-bit ports

In the smaller / lower pin-count parts, like the XU[F]-208 series in QFN/LQFP-64, the number of usable, exposed single-bit ports is ridiculously low. Out of the total 33 I/O lines available, only 7 can be used as single-bit I/O ports. All other lines can only be used for 4-, 8-, or 16-bit ports. If you're dealing primarily with serial interfaces like I2C, SPI, I2S, UART/USART and the like, those parallel ports are of no use, and you have to go for a mechanically much larger part (like the XU[F]-216 in TQFP-128) in order to have a decent number of single-bit ports exposed. Those parts also come with twice the number of cores, memory, etc. - which you don't need for slow-speed serial interfaces...

Insufficient number of exposed xLINKs

The smaller parts like XU[F]-208 only have one xLINK exposed. Of what use is that? If you don't have at least two links available, you cannot daisy-chain them to each other, let alone build more complex structures like cubes (at least 3 xLINKs).

So once again you have to go to much larger packages, where you will not use 80% of the pins or resources, just to get the required number of xLINKs for interconnection.

Change to a non-FOSS License

XMOS deserved a lot of praise for releasing all their soft IP cores as Free / Open Source Software on github at https://github.com/xcore. The License has basically been a 3-clause BSD license. This was a good move, as it meant that anyone could create derivative versions, whether proprietary or FOSS, and there would be virtually no license incompatibilities with whatever code people wanted to write.

However, to my very big disappointment, more recently XMOS seems to have changed their policy on this. New soft cores (released at https://github.com/xmos as opposed to the old https://github.com/xcore) are made available under a non-free license. This license is nothing like the BSD 3-clause license or any other Free Software or Open Source license. It restricts use of the code to XMOS products, requires the user to contribute fixes back to XMOS, and contains references to import and export control. This license is incompatible with probably any FOSS license in existence, making it impossible to write FOSS code on XMOS while using any of the new soft cores released by XMOS.

But even beyond that license change, not even all code is provided in source code format anymore. The new USB library (lib_usb) is provided as binary-only library, for example.

If you know anyone at XMOS management or XMOS legal with whom I could raise this topic of license change when transitioning from older sc_* software to later lib_* code, I would appreciate this a lot.

Proprietary Compiler

While a lot of the toolchain and IDE is based on open source (Eclipse, LLVM, ...), the actual xC compiler is proprietary.

Syndicated 2017-09-01 22:00:00 from LaForge's home page

Osmocom jenkins test suite execution

Automatic Testing in Osmocom

So far, in many Osmocom projects we have unit tests next to the code. Those unit tests exercise individual C functions, typically calling the respective function directly from a small test program executed at make check time. The actual main program (like OsmoBSC or OsmoBTS) is not executed at that time.

We also have VTY testing, which specifically tests that the VTY has proper documentation for all nodes of all commands.

Then there's a big gap, and we have osmo-gsm-tester for testing a full cellular network end-to-end. It includes physical GSM modems, a coaxial distribution network, attenuators, splitters/combiners, real BTS hardware and logic to run the full network, from OsmoBTS to the core - both for OsmoNITB and OsmoMSC+OsmoHLR based networks.

However, I think a lot of testing falls somewhere in between, where you want to run the program-under-test (e.g. OsmoBSC), but you don't want to run the MS, BTS and MSC that normally surround it. You want to test it by emulating the BTS on the Abis side and the MSC on the A side, and just test Abis and A interface transactions.

For this kind of testing, I have recently started to investigate available options and tools.

OsmoSTP (M3UA/SUA)

Several months ago, during the development of OsmoSTP, I discovered that the Network Programming Lab of Münster University of Applied Sciences, led by Michael Tuexen, had released implementations of the ETSI test suite for the M3UA and SUA members of the SIGTRAN protocol family.

The somewhat difficult part is that they are implemented in scheme, using the guile interpreter/compiler, as well as a C-language based execution wrapper, which then is again called by another guile wrapper script.

I've reimplemented the test executor in python and added JUnitXML output to it. This means it can feed the test results directly into Jenkins.
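The JUnitXML format that Jenkins consumes is simple enough that a minimal emitter fits in a few lines; a sketch of the general idea (function and test names here are made up, not the actual test executor code):

```python
import xml.etree.ElementTree as ET

def write_junit_xml(suite_name, results, path):
    """Write a minimal JUnitXML report.

    results: list of (test_name, passed, detail) tuples.
    """
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)),
                       failures=str(sum(1 for _, ok, _ in results if not ok)))
    for name, ok, detail in results:
        case = ET.SubElement(suite, "testcase", classname=suite_name, name=name)
        if not ok:
            # Jenkins shows the <failure> element's message/text per test case
            failure = ET.SubElement(case, "failure", message="test failed")
            failure.text = detail
    ET.ElementTree(suite).write(path, encoding="utf-8", xml_declaration=True)

# Hypothetical results of two SIGTRAN test cases
write_junit_xml("m3ua", [("ASP_UP", True, ""), ("BEAT", False, "timeout")],
                "results.xml")
```

Jenkins' JUnit plugin then picks up `results.xml` and renders pass/fail trends per test case.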

I've also cleaned up the Dockerfiles and related image generation for the osmo-stp-master, m3ua-test and sua-test images, as well as some scripts to actually execute them on one of the builders. You can find the related Dockerfiles as well as associated Makefiles in http://git.osmocom.org/docker-playground

The end result after integration with the Osmocom jenkins can be seen in the following examples on jenkins.osmocom.org for M3UA and for SUA.

Triggering the builds is currently periodic once per night, but we could of course also trigger them automatically at some later point.

OpenGGSN (GTP)

For OpenGGSN, during the development of IPv6 PDP context support, I wrote some test infrastructure and test cases in TTCN-3. Those test cases can be found at http://git.osmocom.org/osmo-ttcn3-hacks/tree/ggsn_tests

I've also packaged the GGSN and the test cases each into separate Docker containers called osmo-ggsn-latest and ggsn-test. Related Dockerfiles and Makefiles can again be found in http://git.osmocom.org/docker-playground - together with a Eclipse TITAN Docker base image using Debian Stretch called debian-stretch-titan

Using those TTCN-3 test cases with the TITAN JUnitXML logger plugin we can again integrate the results directly into Jenkins, whose results you can see at https://jenkins.osmocom.org/jenkins/view/TTCN3/job/ttcn3-ggsn-test/14/testReport/(root)/GGSN_Tests/

Further Work

I've built some infrastructure for Gb (NS/BSSGP), Virtual Um and other testing, but have yet to build Docker images and related jenkins integration for it. Stay tuned. Also, lots more actual test cases are required. I'm very much looking forward to any contributions.

Syndicated 2017-08-19 22:00:00 from LaForge's home page

IPv6 User Plane support in Osmocom

Preface

Cellular systems ever since GPRS are using a tunnel based architecture to provide IP connectivity to cellular terminals such as phones, modems, M2M/IoT devices and the like. The MS/UE establishes a PDP context between itself and the GGSN on the other end of the cellular network. The GGSN then is the first IP-level router, and the entire cellular network is abstracted away from the User-IP point of view.

This architecture didn't change with EGPRS, and not with UMTS, HSxPA and even survived conceptually in LTE/4G.

While the concept of a PDP context / tunnel exists to de-couple the transport layer from the structure and type of data inside the tunneled data, the primary user plane so far has been IPv4.

In Osmocom, we made sure that there are no impairments / assumptions about the contents of the tunnel, so OsmoPCU and OsmoSGSN do not care at all what bits and bytes are transmitted in the tunnel.

The only Osmocom component dealing with the type of tunnel and its payload structure is OpenGGSN. The GGSN must allocate the address/prefix assigned to each individual MS/UE, perform routing between the external IP network and the cellular network and hence is at the heart of this. Sadly, OpenGGSN was an abandoned project for many years until Osmocom adopted it, and it only implemented IPv4.

This is actually a big surprise to me. Many of the users of the Osmocom stack are from the IT security area. They use the Osmocom stack to test mobile phones for vulnerabilities, analyze mobile malware and the like. As any penetration tester should be interested in analyzing all of the attack surface exposed by a given device-under-test, I would have assumed that testing just on IPv4 would be insufficient and over the past 9 years, somebody should have come around and implemented the missing bits for IPv6 so they can test on IPv6, too.

In reality, nobody appears to have shared that line of thinking and invested a bit of time in growing the tools used. Or if they did, they didn't share the related code.

In June 2017, Gerrie Roos submitted a patch for OpenGGSN IPv6 support that raised hopes about soon being able to close that gap. However, on closer inspection it turned out that the code was written against a more than 7 years old version of OpenGGSN, and it seems to primarily focus on IPv6 on the outer (transport) layer, rather than on the inner (user) layer.

OpenGGSN IPv6 PDP Context Support

So in July 2017, I started to work on IPv6 PDP support in OpenGGSN.

Initially I thought How hard can it be? It's not like IPv6 is new to me (I joined the 6bone under 3ffe prefixes back in the 1990s and worked on IPv6 support in ip6tables ages ago). And aside from allocating/matching longer addresses, what kind of complexity does one expect?

After my initial attempt at an implementation, partially misled by the patch that was contributed against that 2010-or-older version of OpenGGSN, I'm surprised how wrong I was.

In IPv4 PDP contexts, the process of establishing a PDP context is simple:

  • Request establishment of a PDP context, set the type to IETF IPv4
  • Receive an allocated IPv4 End User Address
  • Optionally use IPCP (part of PPP) to request and receive DNS Server IP addresses
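For illustration, the allocated address travels in the End User Address information element; a rough sketch of its IPv4 encoding (the IE type and PDP type numbers are as I recall them, so treat the constants as assumptions):

```python
import socket
import struct

def encode_eua_ipv4(addr: str) -> bytes:
    """Sketch of a GTPv1 End User Address IE for an IPv4 address.

    0x80 = EUA IE type (TLV), 0xF1 = spare nibble + org IETF,
    0x21 = PDP type number for IPv4 (constants assumed, not verified).
    """
    payload = bytes([0xF1, 0x21]) + socket.inet_aton(addr)
    return struct.pack("!BH", 0x80, len(payload)) + payload

print(encode_eua_ipv4("192.168.1.2").hex())  # 800006f121c0a80102
```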

So I implemented the identical approach for IPv6. Maintain a pool of IPv6 addresses, allocate one, and use IPCP for DNS. And nothing worked.

  • IPv6 PDP contexts assign a /64 prefix, not a single address or a smaller prefix
  • The End User Address that's part of the Signalling plane of Layer 3 Session Management and GTP is not the actual address, but just serves to generate the interface identifier portion of a link-local IPv6 address
  • IPv6 stateless autoconfiguration is used with this link-local IPv6 address inside the User Plane, after the control plane signaling to establish the PDP context has completed. This means the GGSN needs to parse ICMPv6 router solicitations and generate ICMPv6 router advertisements.
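The second point can be illustrated with a few lines of Python: the signalled End User Address merely seeds the 64-bit interface identifier, from which the UE forms its link-local address (the identifier below is just an example value):

```python
import ipaddress

def link_local_from_iid(iid: bytes) -> ipaddress.IPv6Address:
    """Combine the fe80::/64 link-local prefix with a 64-bit
    interface identifier derived from the signalled EUA."""
    assert len(iid) == 8
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + iid)

# Example interface identifier as it might be taken from the EUA
iid = bytes.fromhex("0000000000000001")
print(link_local_from_iid(iid))  # fe80::1
```

Only after router solicitation/advertisement does the UE learn the actual /64 prefix and form its global address via SLAAC.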

To make things worse, the stateless autoconfiguration is modified in some subtle ways to make it different from the normal SLAAC used on Ethernet and other media:

  • the timers / lifetimes are different
  • only one prefix is permitted
  • only a prefix length of 64 is permitted

A few days later I implemented all of that, but it still didn't work. The problem was with DNS server addresses. In IPv4, the 3GPP protocols simply tunnel IPCP frames for this. This makes a lot of sense, as IPCP is designed for point-to-point interfaces, and that's exactly what a PDP context is.
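For reference, the IPv4-side mechanism really is tiny: RFC 1877 defines DNS server options that ride inside an ordinary IPCP Configure-Request/Ack. A sketch of the option encoding:

```python
import socket
import struct

def ipcp_dns_option(addr: str, secondary: bool = False) -> bytes:
    """RFC 1877 IPCP option: type 0x81 (primary DNS) or 0x83
    (secondary DNS), length 6, followed by the IPv4 address."""
    opt_type = 0x83 if secondary else 0x81
    return struct.pack("!BB", opt_type, 6) + socket.inet_aton(addr)

print(ipcp_dns_option("8.8.8.8").hex())  # 810608080808
```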

In IPv6, the corresponding IP6CP protocol does not have the capability to provision DNS server addresses to a PPP client. WTF? The IETF seriously requires implementations to do DHCPv6 over PPP, after establishing a point-to-point connection, only to get DNS server information?!? Some people suggested an IETF draft to change this, but the draft expired in 2011 and we're still stuck.

While 3GPP permits the use of DHCPv6 in some scenarios, support for it in phones/modems is not mandatory. Rather, 3GPP has come up with its own mechanism for communicating DNS server IPv6 addresses during PDP context activation: the use of containers as part of the PCO Information Element used in L3-SM and GTP (see Section 10.5.6.3 of 3GPP TS 24.008). By the way, they also specified the same mechanism for IPv4, so there are now two competing methods for provisioning IPv4 DNS server information: IPCP and the new method.
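A rough sketch of what one such PCO container looks like on the wire; the container ID for an IPv6 DNS server address is 0x0003 as far as I recall TS 24.008, so treat that constant as an assumption:

```python
import ipaddress
import struct

def pco_dns_ipv6_container(addr: str) -> bytes:
    """One PCO container: 2-byte container ID, 1-byte contents length,
    then the contents. 0x0003 is assumed to be 'DNS Server IPv6 Address'
    in the network-to-MS direction."""
    contents = ipaddress.IPv6Address(addr).packed
    return struct.pack("!HB", 0x0003, len(contents)) + contents

c = pco_dns_ipv6_container("2001:4860:4860::8888")
print(len(c))  # 19 = 3-byte container header + 16-byte address
```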

In any case, after some more hacking, OpenGGSN can now also provide DNS server information to the MS/UE. And once that was implemented, I had actual live user IPv6 data over a full Osmocom cellular stack!

Summary

We now have working IPv6 User IP in OpenGGSN. Together with the rest of the Osmocom stack you can operate a private GPRS, EGPRS, UMTS or HSPA network that provide end-to-end transparent, routed IPv6 connectivity to mobile devices.

All in all, it took much longer than needed, and the following questions remain in my mind:

  • why did the IETF not specify IP6CP capabilities to configure DNS servers?
  • why the complex two-stage address configuration with PDP EUA allocation for the link-local address first and then stateless autoconfiguration?
  • why don't we simply allocate the entire prefix via the End User Address information element on the signaling plane? Surely, next to the 16-byte address we could have put one byte for the prefix length?
  • why do I see duplicate address detection flavour neighbour solicitations from Qualcomm-based phones on what is a point-to-point link with exactly two devices: the UE and the GGSN?
  • why do I see link-layer source address options inside the ICMPv6 neighbor and router solicitation from mobile phones, when that option is specifically not to be used on point-to-point links?
  • why is the smallest prefix that can be allocated a /64? That's such a waste for a point-to-point link with a single device on the other end, and in times of billions of connected IoT devices it will just encourage the use of non-public IPv6 space (i.e. SNAT/MASQUERADING) while wasting large parts of the address space

Some of those choices would have made sense if one had made it fully compatible with normal IPv6 as used e.g. on Ethernet. But implementing ICMPv6 router and neighbor solicitation without getting any benefit such as the ability to have multiple prefixes or prefixes of different lengths - I just don't understand why anyone ever thought this was a good idea. You can find the code at http://git.osmocom.org/openggsn/log/?h=laforge/ipv6 and the related ticket at https://osmocom.org/issues/2418

Syndicated 2017-08-08 22:00:00 from LaForge's home page

Virtual Um interface between OsmoBTS and OsmocomBB

During the last couple of days, I've been working on completing, cleaning up and merging a Virtual Um interface (i.e. virtual radio layer) between OsmoBTS and OsmocomBB. After I started with the implementation and left it in an early stage in January 2016, Sebastian Stumpf has been completing it around early 2017, with now some subsequent fixes and improvements by me. The combined result allows us to run a complete GSM network with 1-N BTSs and 1-M MSs without any actual radio hardware, which is of course excellent for all kinds of testing scenarios.

The Virtual Um layer is based on sending L2 frames (blocks) encapsulated via GSMTAP UDP multicast packets. There are two separate multicast groups, one for uplink and one for downlink. The multicast nature simulates the shared medium and enables any simulated phone to receive the signal from multiple BTSs via the downlink multicast group.
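A minimal sketch of what a receiver on one of those multicast groups has to do: parse the fixed 16-byte GSMTAP header in front of each L2 block. The header layout below follows the GSMTAP specification; the example field values (ARFCN, frame number, payload) are made up for illustration and are not taken from an actual osmo-bts-virtual capture:

```python
import struct
from collections import namedtuple

# Fixed 16-byte GSMTAP v2 header; all multi-byte fields are big-endian.
GsmtapHdr = namedtuple(
    "GsmtapHdr",
    "version hdr_len type timeslot arfcn signal_dbm snr_db "
    "frame_number sub_type antenna_nr sub_slot res")
_FMT = "!BBBBHbbIBBBB"  # 16 bytes total

def parse_gsmtap(pkt: bytes):
    """Split a GSMTAP UDP payload into (header, L2 payload)."""
    hdr = GsmtapHdr._make(struct.unpack(_FMT, pkt[:16]))
    # hdr_len is given in units of 32-bit words, so the L2 block
    # starts at that offset.
    return hdr, pkt[hdr.hdr_len * 4:]

# Example: a GSMTAP "Um" frame (type 0x01) on TS 2, ARFCN 871, FN 42,
# followed by a 23-byte dummy L2 block (0x2b is the LAPDm filler octet).
sample = struct.pack(_FMT, 2, 4, 0x01, 2, 871, -60, 10, 42,
                     0, 0, 0, 0) + b"\x2b" * 23
hdr, l2 = parse_gsmtap(sample)
print(hdr.arfcn, hdr.frame_number, len(l2))  # 871 42 23
```

To actually receive frames, such a receiver would additionally join the downlink multicast group via the usual IP_ADD_MEMBERSHIP socket option; the parsing itself is independent of that.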

/images/osmocom-virtum.png

In OsmoBTS, this is implemented via the new osmo-bts-virtual BTS model.

In OsmocomBB, this is realized by adding the virtphy virtual L1, which speaks the same L1CTL protocol that is used between the real OsmocomBB Layer 1 and the Layer 2/3 programs such as mobile and the like.

Now many people would argue that GSM without the radio and actual handsets is no fun. I tend to agree, as I'm a hardware person at heart and I am not a big fan of simulation.

Nevertheless, this forms the basis for all kinds of automated (regression) testing, covering layers and interfaces that osmo-gsm-tester cannot reach, as it uses a black-box proprietary mobile phone (modem). It is also pretty useful if you're traveling a lot and don't want to carry around a BTS and phones all the time, or if you want to get some development done in airplanes or other places where operating a radio transmitter is not really a (viable) option.

If you're curious and want to give it a shot, I've put together some setup instructions at the Virtual Um page of the Osmocom Wiki.

Syndicated 2017-07-18 22:00:00 from LaForge's home page

Ten years after first shipping Openmoko Neo1973

Exactly 10 years ago, on July 9th, 2007, we started to sell+ship the first Openmoko Neo1973. To be more precise, the webshop actually opened a few hours early, depending on your time zone. Sean announced the availability in this mailing list post.

I don't really have much to add to my ten years [of starting to work on] Openmoko anniversary blog post from a year ago, but still thought it worthwhile to point out the tenth anniversary.

Those were exciting times, and there was a lot of pioneering spirit: building a Linux based smartphone with a 100% FOSS software stack on the application processor, including all drivers, userland and applications, at a time before Android was known or announced. As history shows, we'd been working in parallel with Apple on the iPhone and Google on Android. Of course there was little chance that a small Taiwanese company could compete with the endless resources of the big industry giants, and the many Neo1973 delays meant we had missed the window of opportunity to be first on the market.

It's sad that Openmoko (or similar projects) have not survived even as a special-interest project for FOSS enthusiasts. Today, virtually all smartphone options are encumbered with way more proprietary blobs than we could ever have imagined back then.

In any case, the tenth anniversary of trying to change the amount of Free Software in the smartphone world is worth some celebration. I'm reaching out to old friends and colleagues, and I guess we'll have somewhat of a celebration party both in Germany and in Taiwan (where I'll be for my holidays from mid-September to mid-October).

Syndicated 2017-07-09 14:00:00 from LaForge's home page
