Beware BrickerBot, the IoT Killer


IoT security (or the lack thereof) seems to pop up in the news a lot these days. Our Embedded Revolution reader survey and whitepaper highlight what developers think about this topic (Figs. 1 and 2).

IoT devices and PCs are often compromised and employed as “bots” in massive, distributed denial-of-service (DDoS) attacks. Of course, a compromised device can also be used for other nefarious means. The problem is manifold, from the starting point of a bad design to the inability of users to take corrective measures, since vendors are often the only ones that can provide relief from these attacks.

Enter BrickerBot, which was exposed by the security firm Radware. BrickerBot is a form of malware designed to disable, or “brick,” any IoT device it can compromise. Essentially, a bricked device is about as useful as a real brick. Recovery normally requires replacing the device or using more advanced update techniques, such as a direct JTAG connection. This permanent denial-of-service (PDoS) is supposed to be “good” for the community since, in theory and according to the author, it removes the device from the internet and prevents it from being used in a DDoS attack or for other unwanted purposes, from spying with cameras to capturing security information.

Of course, BrickerBot works like existing malware: it conceals itself and its management servers using distributed means. The author is unknown, and any alternative use of the compromised devices is unknown as well, although they appear to be simply bricked.

So, is this a variant on Robin Hood, Zorro, or Batman? A vigilante who remains hidden, but does good?

In a sense, removing a device in this fashion may protect some users from future attacks by another piece of malware running on the device, or from having the device used for other means. Unfortunately, it would be very difficult to notify the owner of the device by any means other than bricking it. Likewise, the owner has no way of knowing that the device was bricked. It will simply stop working, usually to be replaced by another of the same type (and most likely with the same problems).

We in the embedded community may know what’s going on, but the general public is unlikely to associate their disabled devices with a BrickerBot attack. Then again, they wouldn’t know if their device was compromised by another piece of malware, either.

Attacking a device is against the law. So is trespassing in a building to lock the front door. On the other hand, this type of attack would be more like filling a building with concrete to prevent it from being used.

There is also the issue of what such an attack is actually doing. From one side, it is just preventing others from compromising the device by making it unusable. That is merely annoying for something like a wireless speaker or smart lightbulb, but it could be devastating for a security system that suddenly loses all its cameras. I won’t even get into medical or other safety-related devices. In theory, these attacks will consider what kinds of devices they are bricking, but that only holds if the device is easily identifiable, or if the programmer actually takes the time to discern the device and its importance. The likelihood of severe consequences like death is low at this point, but it grows as the number of IoT devices and their uses increase.

This malware does bring up the issue of controlling IoT devices, and whether they should be required to have alternate means of updates. Most are locked down by the vendor, but there is no requirement to support updates. Most vendors rarely provide them past a short device half-life, and some never provide updates even though they are possible.

PDoS is only another in a string of attacks on the ever-growing attack surface of IoT devices. Those movie scenarios where the villain takes over a city grid will look just as bad if a PDoS attack shuts down all the stoplights.

 


Connecting Things Instead of People: The Unexpected Evolution of LTE

I thought I was the coolest college student in 2004, when I could check my e-mail during lectures with my Nokia 3650 mobile phone. However, GPRS network coverage was spotty in rural State College, Pennsylvania, and I don’t believe I ever experienced the 40-kb/s data rate I was promised. More than a decade later, the behavioral patterns of you, me, and seven billion other mobile-phone users have largely driven cellular technology to deliver higher data rates with today’s LTE-Advanced Pro and tomorrow’s 5G technologies.

A recent study by Juniper Research estimates that average smartphone data consumption will rise from 2 GB per month in 2017 to 5 GB per month by 2021. We are the very reason why recent cellular modems like Qualcomm’s Snapdragon X20 and Intel’s XMM 7560 are designed to support up to 1-Gb/s downlink speeds using a combination of MIMO, carrier aggregation, and the 256-QAM modulation scheme.
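To see roughly where a figure like 1 Gb/s comes from, consider the back-of-envelope sketch below. The 4x4 MIMO and two-carrier split are assumptions chosen for illustration; actual configurations vary, and real throughput is lower once control signaling and coding overhead are subtracted.

```python
# Back-of-envelope check on the 1-Gb/s figure. The 4x4 MIMO and two-carrier
# split are assumed for illustration; real throughput is lower once control
# and coding overhead is subtracted.

RB_PER_CARRIER = 100        # resource blocks in a 20-MHz LTE carrier
SUBCARRIERS_PER_RB = 12
SYMBOLS_PER_SUBFRAME = 14   # OFDM symbols per 1-ms subframe
BITS_PER_SYMBOL = 8         # 256-QAM carries 8 bits per symbol
MIMO_LAYERS = 4             # assumed 4x4 MIMO spatial layers
CARRIERS = 2                # assumed two aggregated 20-MHz carriers

symbols_per_second = RB_PER_CARRIER * SUBCARRIERS_PER_RB * SYMBOLS_PER_SUBFRAME * 1000
rate_bps = symbols_per_second * BITS_PER_SYMBOL * MIMO_LAYERS * CARRIERS
print(f"Raw peak rate: {rate_bps / 1e9:.2f} Gb/s")   # ~1.08 Gb/s before overhead
```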

However, applications like machine-type communication (i.e., the Internet of Things) are driving mobile standards in a completely different direction. These devices require lower power consumption and data rates with expectations of lower cost and higher reliability.

Today, several 3GPP technologies are evolving to address use cases that connect “things” instead of people, with features that are very different from what we have grown accustomed to expect from the evolution of LTE. More specifically, some of the up-and-coming 3GPP technologies include LTE for machine-type communication (LTE-M), narrowband IoT (NB-IoT), and LTE Vehicle-to-Everything (LTE V2X).

LTE CAT-M1 and NB-IoT

LTE-M and NB-IoT are part of the LTE family. They’re designed to serve a wide range of low-power and low-data-rate devices as part of wide-area networks. LTE-M started as a simplification of the LTE radio and defined a new category 0 (LTE CAT-0) device in 3GPP Release 12.

1.  NB-IoT can be deployed either adjacent to existing LTE transmissions or within unused resource blocks.

Today, in 3GPP Release 13 and 14, LTE-M is specifically referred to as LTE CAT-M1. These devices use half-duplex radios, transmit at lower output power (+20 dBm instead of +23 dBm), and are only required to support the 1.4-MHz LTE bandwidth configuration. In addition, the standard features an extended “discontinuous reception cycle,” which allows devices to extend battery life by sleeping for up to 40 minutes between transmissions. The target for LTE CAT-M1 devices is to consume approximately one-fifth of the power required by today’s most basic “traditional” LTE radios.
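A rough duty-cycle calculation shows why those long sleep intervals matter so much for battery life. Every figure in the sketch below is an illustrative assumption, not a number from the CAT-M1 specification.

```python
# Rough battery-life estimate for a device that sleeps between transmissions.
# Every figure below is an assumption for illustration, not a CAT-M1 spec.

SLEEP_CURRENT_MA = 0.008    # assumed deep-sleep draw (8 uA)
ACTIVE_CURRENT_MA = 100     # assumed draw while transmitting/receiving
ACTIVE_SECONDS = 2          # assumed awake time per cycle
CYCLE_SECONDS = 40 * 60     # sleeping up to ~40 minutes between transmissions
BATTERY_MAH = 2000          # assumed battery capacity

avg_ma = (ACTIVE_CURRENT_MA * ACTIVE_SECONDS +
          SLEEP_CURRENT_MA * (CYCLE_SECONDS - ACTIVE_SECONDS)) / CYCLE_SECONDS
hours = BATTERY_MAH / avg_ma
print(f"Average draw: {avg_ma:.3f} mA -> about {hours / 8760:.1f} years")
```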

In parallel with LTE CAT-M1, another 3GPP technology called NB-IoT is a slightly longer-distance and lower-complexity cousin (Fig. 1). Although similar to LTE CAT-M1 in many aspects, one notable difference in NB-IoT is that it further reduces the transmission bandwidth to only 180 kHz, allowing for operation in reclaimed GSM spectrum and unused LTE resource blocks. Moreover, NB-IoT reduces the modulation complexity and only allows simple phase-shift-keyed (PSK) modulation schemes (QPSK, π/2-BPSK, π/4-QPSK). An important design attribute of NB-IoT is the improved link budget, which is several dB better than that of LTE-M and is 20 dB higher than legacy GPRS technology.
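What a 20-dB link-budget improvement buys depends on the propagation environment. Under a simple power-law path-loss model (an illustrative assumption, not anything specified by the standard), the extra range works out as follows:

```python
def range_multiplier(gain_db, path_loss_exponent):
    # Received power falls off as d**n, so a gain of G dB stretches the
    # usable range by 10**(G / (10 * n)).
    return 10 ** (gain_db / (10 * path_loss_exponent))

print(range_multiplier(20, 2.0))   # free space (n = 2): 10x the range
print(range_multiplier(20, 3.5))   # assumed cluttered/indoor (n = 3.5): ~3.7x
```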

Cellular V2X vs. WAVE/802.11p/DSRC

The third emerging mobile technology that’s evolving to better connect “things” is LTE V2X, often referred to by the umbrella term “Cellular V2X.” Although LTE V2X originated with the LTE Direct device-to-device communication features in 3GPP Release 12, the standard will be formally released as part of 3GPP Release 14.

LTE V2X is designed as an alternative to existing Wireless Access in Vehicular Environments (WAVE) technology, also known as Dedicated Short-Range Communications (DSRC), which is based on IEEE 802.11p. Unlike WAVE/802.11p/DSRC, LTE V2X allows for both direct vehicle-to-vehicle (V2V) communications and vehicle-to-network (V2N) communications (Fig. 2). In V2V mode, vehicles can tolerate relative velocity differences of up to 500 km/hr at a range of up to 450 meters.

2. LTE V2X supports both vehicle-to-vehicle (V2V) and vehicle-to-network (V2N) communications.
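To put those V2V numbers in perspective, here is a quick sketch of what they imply. The 5.9-GHz carrier frequency is our assumption (the ITS band commonly discussed for V2X), not a figure from the standard text above.

```python
# The article's V2V figures, turned into time and Doppler. The 5.9-GHz
# carrier is an assumption (the ITS band commonly discussed for V2X).

v_mps = 500 / 3.6        # 500 km/hr relative velocity -> ~138.9 m/s
range_m = 450.0          # maximum V2V range
carrier_hz = 5.9e9       # assumed carrier frequency
c = 3.0e8                # speed of light, m/s

print(f"Time to close the gap: {range_m / v_mps:.1f} s")             # ~3.2 s
print(f"Peak Doppler shift: {v_mps * carrier_hz / c / 1e3:.1f} kHz")  # ~2.7 kHz
```

In other words, two cars approaching at the maximum rated closing speed have only about three seconds between first contact and passing, which is why latency and reliability dominate the V2V debate.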

Today, the subject of 802.11p versus LTE V2X has sparked significant debate in the automotive industry, with each camp touting the merits of one technology over the other. In the end, one technology is likely to dominate, but it’s not yet clear which one.

It should come as no surprise that wireless technologies continue to evolve at a pace that only seems to accelerate over time. Although the rapid rate of change is exciting for us as consumers, it creates significant challenges for engineers in the wireless industry.

At both the IC and device level, the introduction of new wireless technologies can add significant design complexity and test cost. As a result, engineers must continually find new methods to lower their cost of test with smarter test systems. With target prices of less than $5 for NB-IoT and LTE CAT-M1 radios, along with wireless standards that change annually, the industry will continue to adopt flexible software-defined test equipment and new test approaches like parallel test.


Compact FPGAs Are Becoming Ubiquitous


When I was involved in mainframe design, we dealt with logic gates that were packed a few dozen per chip. Comparable designs now occur within ASICs and FPGAs. These days, designers turn to off-the-shelf, high-performance microcontrollers because they assume that FPGAs are too power-hungry, expensive, and hard to program. That assumption is especially common for embedded FPGAs that are integrated with custom IP.

FPGA development tools and libraries have improved significantly, providing even novice designers with the ability to construct FPGA designs. Flash FPGAs provide instant-on as well as low power operation.

The high end of the FPGA spectrum incorporates very-high-speed SERDES and millions of lookup tables (LUTs), the building block of FPGAs. At the opposite end of the spectrum are tiny FPGAs like Lattice Semiconductor’s iCE40 UltraPlus, which comes in a 1.4 mm × 1.4 mm × 0.45 mm WLCSP package or conventional packages starting at a 2.15-mm by 2.55-mm QFN (see figure). The flash-based FPGA has up to 5,000 LUTs, 8 DSP blocks, and 1.1 Mbits of SRAM. Versions also include a MIPI-I3C interface for low-resolution, always-on camera applications. It uses under 100 µW of standby power.

The Microsemi IGLOO and Intel/Altera Max 10 are also flash-based FPGA families that come in compact packages. The Microsemi IGLOO family has up to 35K LUTs and comes in packages as small as 3-mm by 3-mm. The IGLOO/e includes a license for a soft-core Cortex-M1. The IGLOO nano uses only 2 µW. IGLOO’s Flash*Freeze mode shuts down the system while preserving SRAM and register contents. Entering and exiting this mode takes less than 1 µs.

The Max 10 is likewise available in a 3-mm by 3-mm package, and it adds analog blocks, DSP blocks, and external DDR3 interfaces. Versions are available with up to 736 Kbytes of flash memory for soft-core NIOS II processor object code.

Xilinx’s Spartan-7 is a RAM-based FPGA that is available in an 8-mm by 8-mm CPBGA package at the low end of the family. It contains 6,000 LUTs and can support a MicroBlaze soft-core processor.

FPGA Soft Core Options

Soft-core processors like the ARM Cortex-M1, NIOS II, and MicroBlaze have some company these days. The RISC-V architecture is now available for FPGAs: it is possible to use the Rocket chip generator, or to take advantage of the PicoRV32 or Orca designs that target FPGAs.

The number of LUTs needed for a 32-bit soft-core processor varies from FPGA to FPGA, as well as between core designs and associated features like memory management units. Compact implementations fit in 600 to 700 LUTs, while higher-end versions run about 2,000 to 3,000 LUTs. This is still a fraction of most FPGAs, even at the low end of the spectrum, leaving plenty of headroom for custom logic or implementing other peripherals like serial ports.
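As a quick sanity check of that headroom claim, using the LUT figures quoted above (a rough sketch; actual utilization depends on the toolchain and core configuration):

```python
# Headroom check using the LUT figures quoted above; actual utilization
# depends on the toolchain and core configuration.

DEVICE_LUTS = 5000   # a small FPGA, e.g., an iCE40 UltraPlus-class part

for name, core_luts in [("compact 32-bit core", 700),
                        ("higher-end 32-bit core", 3000)]:
    free = DEVICE_LUTS - core_luts
    print(f"{name}: {free} LUTs ({free / DEVICE_LUTS:.0%}) left for custom logic")
```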

FPGA Advantages

FPGAs, even small ones, provide a number of advantages over conventional microcontrollers. FPGAs can be more flexible since they can be reprogrammed. Logic tends to be more power efficient and faster than software solutions. Designs are less prone to being copied, and some FPGA implementations go to extremes to help prevent analysis of the design.

FPGAs can also provide lower-power and possibly lower-cost solutions because of the level of customization available and the efficiencies of an FPGA implementation. This is especially true where the quantity needed is insufficient to warrant an ASIC design, which normally requires a significant up-front investment and large quantities to be economical.

In many cases, an FPGA approach allows single-chip solutions, whereas a microcontroller or microprocessor design would require additional chips. Electronic Design’s Embedded Revolution survey indicates that many designers are investigating deep neural networks (DNN) and artificial intelligence (AI) applications. The flexible design possible with an FPGA can support DNNs that often need only a few bits of information, allowing a small FPGA to be used while also benefitting from the parallel nature of an FPGA.
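To illustrate why few-bit networks suit small FPGAs, here is a minimal sketch of uniform weight quantization (our own illustrative code, not a production quantizer). Mapping weights to small signed integers is what lets each multiply become cheap fixed-point logic in the fabric.

```python
import numpy as np

# Minimal sketch of uniform weight quantization; illustrative only, not a
# production quantizer. Small integer weights make each multiply cheap
# fixed-point logic in FPGA fabric.

def quantize(weights, bits):
    qmax = 2 ** (bits - 1) - 1                 # e.g., 3 bits -> [-4, 3]
    scale = np.abs(weights).max() / qmax       # one scale factor per tensor
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

w = np.random.randn(4, 4).astype(np.float32)   # toy weight matrix
q, scale = quantize(w, bits=3)
print(q)                                       # integers in [-4, 3]
print(np.abs(w - q * scale).max())             # worst-case quantization error
```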

 


Open-Source Textbooks Need to be Configurable


College textbook costs are high. The Maryland Open Source Textbook (MOST) Initiative (see figure) looks to reduce those expenses, but will it be enough? Being an engineer and programmer with a bit of web experience, I think it’s a valiant effort—but one that will fall short unless the delivery mechanism changes.

MOST was started back in 2013 as a collaboration between the University System of Maryland Student Council (USMSC) and the System’s Center for Academic Innovation (CAI). The latest news from the initiative is the announcement of a MOST mini-grant program that provides monetary support for generating open-source content. The mini-grants are a couple of thousand dollars each and only provide an incentive to increase adoption in “high impact, high enrollment courses for which high-quality OER already exists.”

Open-source documents are often used in Massive Open Online Courses (MOOCs), which run on both open- and closed-source systems. Most MOOC courses have minimal fees, typically to cover management or to provide materials or testing services; many are completely free. MOOCs are often supported and used by universities that have opened their courses and courseware to the public. Some universities also use them to deliver online courses as part of a degree curriculum.

Content repositories are spread throughout the internet. The University of Cambridge has some links for finding this content; this includes platforms like Google Scholar. The Digital Repository at the University of Maryland (DRUM) is another resource where scholarly works can be posted by the contributor for access by the public.

Also of note is Khan Academy, which provides free online courses. This is tailored more for delivery of interactive content packaged by subject, ranging from math to the humanities. It targets middle and high school students.

Another source of content and training materials for the embedded space is vendors that provide free access to app notes and tools, although these days you need to give up at least your name, e-mail address, and phone number. Many also have training materials, including videos. Of course, these target their products, but that’s what customers (even students) usually need. Much of the change in this arena is aimed at the emerging maker and maker pro space, where users are still learning what is available and how it works, both to create new products and to evolve new ideas.

The Problem

There are a host of issues related to open-source content, MOOCs, and so on. Teachers may not want to use them in place of their own teaching methods, and finding suitable content is sometimes an issue. Searching for a content match for a particular course isn’t as easy as you might think.

One major problem I see is the delivery and use of static content like e-books. This includes PDFs and even slideshows. The problem is that these are static, packaged items—often with limited annotation tools—designed to be given from a provider (usually a teacher) to a consumer (the student). This assumes that the content is suitable as is and that the provider or student cannot or should not modify it in any fashion.

Certain topics change quickly. This means that content in a static container will often be dated, though perhaps only in fringe areas. This is why major books have multiple revisions with only minor changes to the content. For printed material, teachers are forced to wait for new revisions, providing their own material in the meantime to complement the current or available edition.

The migration from printed textbooks to e-books has seen only minimal benefit from the hypertext underpinnings of the internet. Hyperlinks within an e-book are a step up from the textual references and indices found in a printed book, but it is surprising how limited this step was compared to what is really needed and could be implemented.

A Solution

What is needed is an e-book format that provides services like alternate presentation streams, including multiple tables of contents (TOCs), a journaling system for tracking changes, and tools to provide extraction and migration of structural annotations. These would be in addition to annotation tools. While definitely a tall order, it’s one that could be built on existing e-book technology like the EPUB format, which is already built on HTML files.

There are a host of issues that would need to be addressed for this type of environment, but they are ones that already have solutions. For example, every page or group of pages within an e-book would need a universally unique identifier (UUID) so changes could be tracked between documents. This would allow a teacher to edit their copy of an e-book and export changes that could be merged with a student’s document.
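A minimal sketch of that mechanism, assuming a toy representation where a book is a dictionary of UUID-keyed pages (all names here are invented for illustration):

```python
import copy
import uuid

# Toy model: a book is a dict of UUID-keyed pages. All names here are
# invented for illustration.

def new_page(text):
    return {"id": str(uuid.uuid4()), "text": text}

pages = [new_page("Ohm's law..."), new_page("Kirchhoff's laws...")]
original = {p["id"]: p for p in pages}

# A teacher edits a private copy of the book...
teacher = copy.deepcopy(original)
teacher[pages[0]["id"]]["text"] += " [Note: see lab handout 3.]"

# ...exports only the pages that changed, keyed by UUID...
patch = {pid: page for pid, page in teacher.items()
         if page["text"] != original[pid]["text"]}

# ...and the patch merges cleanly into a student's copy.
student = copy.deepcopy(original)
student.update(patch)
```

Because pages are matched by UUID rather than by position, the merge works even if the two copies have diverged in page order or added material.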

The added content could include information the teacher created or copied from other e-books. This is actually a key item, because the copied information would have identifiable information about where it came from and who provided it. This might even be something that gets linked to a blockchain tracking system.

The approach would also provide a way for an e-book author to provide updates. The journaling system would be used to track changes and provide rollback capabilities.
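Continuing the same toy representation, a journaling layer could record each page’s prior state so any change can be rolled back:

```python
# Continuing the toy model above: an append-only journal records each
# page's prior text, so any change (a teacher's edit or an author's
# update) can be rolled back.

journal = []

def apply_change(book, page_id, new_text):
    journal.append((page_id, book[page_id]["text"]))   # remember prior state
    book[page_id]["text"] = new_text

def rollback(book):
    page_id, old_text = journal.pop()                  # undo the latest change
    book[page_id]["text"] = old_text
```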

I admit that this is a holy grail, and one that is such a major project that most will forgo even looking at such an approach. Nevertheless, it’s one that could gain support in the commercial space, where most successful open-source projects gain traction. Documentation, training, and dissemination of content have been relegated to a web that is ill-prepared to provide more than the current incarnation of static content in forms that can be downloaded and utilized. The possibilities in terms of advertising, support, and reduced maintenance costs are significant. It could even be integrated into training and testing regimens.

I would use it as a research tool as well, since it could be used for distributing curated content. It would be an effective way to provide downloaded content from a website that would be integrated into a document, which would then be distributed. The downloads could include additional material useful for some readers, such as teachers or purchasing agents, while being placed in an alternate TOC and presentation stream.
