projectM Music Visualizer Status Update

As I’ve ended up with de facto maintainership of the illustrious projectM open source music visualizer, I’ve seen a fair bit of interest in the project. I think I at least owe a blog post to update folks on where it’s at, what needs working on, and how to help make it better.

 

What is projectM?

projectM is a music visualizer program. In short it makes cool animations that are synchronized and reactive to any music input. I say music and not audio because it includes beat detection for making interesting things happen on the beat.


History

Some of you may remember the old Windows MP3 player Winamp. It contained a supremely amazing and innovative music visualizer called Milkdrop, written by a gentleman from nVidia named Ryan Geiss, known simply as Geiss. The visualizer was not a single fixed set of rules for visualizing audio but rather a mathematical interpreter that would read in “preset” files, which are sets of equations. You can read the very illuminating description here of how the files are defined if you’re interested. In short, there is a set of per-frame equations describing colors, FFT waveforms, and simple transformations, and a set of per-vertex equations for more detailed transformations and deformations.
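
To make the two-stage model concrete, here is a toy sketch in Python (not real Milkdrop or projectM code, just an illustration of the idea): the per-frame equations update global parameters once per frame from audio levels and time, and the per-vertex equations then warp every point of a coarse mesh.

import math

# Toy illustration of the preset evaluation model (not real Milkdrop code):
# per-frame equations run once per frame, per-vertex equations run for every
# point of a coarse mesh that warps the previous frame's image.

def per_frame(state, bass, treb, t):
    # e.g. "zoom = 1.01 + 0.02*bass; rot = 0.05*sin(time)"
    state["zoom"] = 1.01 + 0.02 * bass
    state["rot"] = 0.05 * math.sin(t)
    return state

def per_vertex(state, x, y):
    # warp each mesh point toward/away from the centre and rotate it slightly
    dx, dy = x - 0.5, y - 0.5
    r = math.hypot(dx, dy) / state["zoom"]
    ang = math.atan2(dy, dx) + state["rot"]
    return 0.5 + r * math.cos(ang), 0.5 + r * math.sin(ang)

state = per_frame({}, bass=0.8, treb=0.3, t=12.0)
mesh = [per_vertex(state, x / 31.0, y / 23.0) for y in range(24) for x in range(32)]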

Due to the popularity of WinAmp and Milkdrop there have been many thousands of presets authored and shared with really stunning and innovative visual effects ranging from animated fractals to dancing stick figures to bizarre abstract soups. The files are often named things like:

  • shifter – cellular_Phat_YAK_Infusion_v2.milk
  • [dylan] cube in a room -no effects – code is very messy nz+ finally some serious stfu (loavthe).milk
  • NeW Adam Master Mashup FX 2 Zylot – In death there is life (Dancing Lights mix)+ Tumbling Cubes 3d.milk
  • suksma + aderassi geiss – the sick assumptions you make about my car [shifter’s esc shader] nz+.milk
  • flexi + cope – i blew you a soap bubble now what – feel the projection you are, connected to it all nz+ wrepwrimindloss w8.milk

And so on.


As I understand it, possibly incorrectly, there were two major problems with Milkdrop. First, it was implemented with DirectX, Win32 APIs, and assembler; second, it was not open source (though it was made open source fairly recently). So some enterprising folks created projectM in 2003 as an open source reimplementation that would be Milkdrop preset-compatible.

I didn’t work on projectM originally and I am not responsible for the vast majority of it. However, the previous authors and contributors have for whatever reason mostly abandoned the project, so it was left to random people to make it work. The code is quite old, although the core Milkdrop preset parsing, beat detection, most of the OpenGL calls (more on that later), and rendering are in fine shape. projectM is really just a library, though, designed to be used by applications. In the past there have been XMMS and VLC plugins, a Qt application, PulseAudio- and JACK-based applications, and more.

 

OSX iTunes Plugin

Not really having a good solution for OSX, I went ahead and ported the ancient iTunes visualizer code to work on a then-modern version of iTunes and voilà! projectM on OSX. Though I did have to deal with the very unfortunate Objective-C++ “language” to make it work. Not Objective-C, Objective-C++. No, I didn’t know that existed either.


I tried to submit the plugin to the Mac App store as a free download. Not to make money or anything, just to make it easy for people to get it. The unpleasantness of this experience with Apple and their rejection is actually what spurred me to start this blog so I could complain about it.

Much to my dismay, and apparently that of a number of other people, a recent version of iTunes or macOS caused the iTunes visualizer plugin to stop working as well as it used to. It appears to be related to drawing and subviews in the plugin.

 

Cross-Platform Standalone Application

I decided that what would be better is a cross-platform standalone application that simply listens to audio input and visualizes it. This dream was made possible by a recent addition to the venerable cross-platform SDL2 media library: support for audio capture. I quickly hacked together a passable but very basic SDL2-based application that runs on Linux and macOS, and in theory Windows and other platforms as well. Some work needs to be done to add key commands, text overlays (preset name, help, etc.), better fullscreen support, and easy selection of which audio input device to use.

The main application code demonstrates how simple libprojectM is to use. All one must do is set up an OpenGL rendering context, set some configuration settings, and start feeding in audio PCM data to the projectM instance. It automatically performs beat detection and drawing to the current OpenGL context. It’s really ideal for being integrated into other applications and I hope people continue to do so.
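
To give a feel for the flow (not the actual libprojectM API, which is C++; the class and method names below are hypothetical stand-ins), the host application's responsibilities look roughly like this:

# Conceptual sketch of a host application driving projectM. The ProjectM
# class is a hypothetical stand-in for a real binding; names are illustrative
# only, not the actual libprojectM API.
import time

class ProjectM:
    """Hypothetical stand-in for a libprojectM binding."""
    def __init__(self, preset_dir, mesh_x=32, mesh_y=24, fps=60):
        self.preset_dir = preset_dir              # where the .milk presets live
        self.mesh_x, self.mesh_y, self.fps = mesh_x, mesh_y, fps
    def add_pcm(self, samples):
        """The real library ingests interleaved stereo PCM floats here."""
    def render_frame(self):
        """The real library runs beat detection and draws into the current GL context."""

def run(pm, capture_audio, create_gl_context, swap_buffers):
    create_gl_context()                  # the host application owns the OpenGL context
    while True:
        pm.add_pcm(capture_audio())      # feed captured PCM samples to projectM
        pm.render_frame()                # projectM detects beats and draws the frame
        swap_buffers()                   # the host presents the rendered frame
        time.sleep(1.0 / pm.fps)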


You can obtain the source, plus OSX and Linux builds, from the releases page. This is super crappy and experimental: it needed some configuration tuning to make it look good, and you need to drop the presets folder in yourself. But it’s a start.

Build System

In their infinite wisdom the original authors chose the CMake build system. After wasting many hours of my life that I will not get back, and almost giving up on the software profession altogether, I decided it would be easier to switch to GNU Autotools, the build system most other open source projects use, than to deal with CMake’s bullshit. So now it uses Autotools (aka the “./configure && make && make install” system everyone knows and loves).

 

Needed Efforts

This is where you come in. If you like music visualizers and want to help the software achieve greater things there is some work to be done modernizing it.

The most important task by far is getting rid of the OpenGL immediate-mode calls and replacing them with vertex buffer objects (VBOs). A VBO is a “new” (not new at all) way of doing things that involves creating a chunk of memory containing vertices and pushing it to the GPU so it can decide how and when to render your triangles. The old-school way was “immediate mode,” where you would tell OpenGL things like glBegin(GL_QUADS) (“I’m going to give you a sequence of vertices for quadrilaterals”) and hand it vertices one at a time. This is tremendously inefficient and slow, so it isn’t supported by the newer OpenGL ES, which is what any embedded device (like a phone or Raspberry Pi) supports, nor by WebGL.
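
Here is a rough sketch of the difference, using PyOpenGL syntax. It assumes an OpenGL context is already current (e.g. created with SDL2), and a real core-profile port would also bind a shader program, which is omitted here for brevity.

# Rough sketch: immediate mode vs. a VBO (PyOpenGL; assumes a current GL context).
import numpy as np
from OpenGL.GL import (
    glBegin, glEnd, glVertex2f, GL_QUADS, GL_TRIANGLE_STRIP,
    glGenBuffers, glBindBuffer, glBufferData, glDeleteBuffers,
    glEnableVertexAttribArray, glVertexAttribPointer, glDrawArrays,
    GL_ARRAY_BUFFER, GL_STATIC_DRAW, GL_FLOAT, GL_FALSE,
)

def draw_quad_immediate():
    # Old immediate mode: hand vertices to the driver one call at a time.
    glBegin(GL_QUADS)
    glVertex2f(-1, -1); glVertex2f(1, -1); glVertex2f(1, 1); glVertex2f(-1, 1)
    glEnd()

def draw_quad_vbo():
    # VBO style: upload all vertices to GPU memory once, then draw them in one call.
    verts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=np.float32)
    vbo = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, verts.nbytes, verts, GL_STATIC_DRAW)
    glEnableVertexAttribArray(0)
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, None)
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)
    glDeleteBuffers(1, [vbo])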

I believe that projectM would be most awesome as a hardware device with an audio input and an HDMI output, but making a reasonably-sized and -priced solution would mean using an embedded device. It would be great to have a web application (I attempted to do this with Emscripten, a JavaScript backend for LLVM), but that requires WebGL. Having an open source app for Android and iOS would be amazing. All of this requires the small number of existing immediate-mode calls to be updated to use VBOs instead. Somebody who knows more about this stuff or has more time than I do should do it. There aren’t a lot of places in the code where they are used; see this document.

Astute readers may note that there already are iOS and Android projectM apps. They are made by one of the old developers who has made the decision to not share his modern OpenGL modifications with the project because he makes money off of them.

why the fuck hoarders don’t share their code back

Another, similar effort is to replace the very old dependency on the nVidia Cg framework for enabling shaders. Cg was used because it closely matches DirectX’s shader syntax. GLSL, the standard OpenGL shading language, is not the same, and requires manual conversion of the shaders in each preset.

The Cg framework has been deprecated and unsupported for many years and work needs to be done to use the built-in GLSL compilation calls instead of Cg and convert the preset shaders. I already did some work on this but it’s far from finished.
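
For reference, compiling GLSL with plain OpenGL calls looks roughly like this (PyOpenGL syntax, assuming a current GL context; the fragment shader source here is a trivial placeholder, not a converted preset shader):

# Sketch: compiling a GLSL fragment shader with built-in GL calls instead of Cg.
from OpenGL.GL import (
    glCreateShader, glShaderSource, glCompileShader, glGetShaderiv,
    glGetShaderInfoLog, GL_FRAGMENT_SHADER, GL_COMPILE_STATUS,
)

FRAG_SRC = """
#version 120
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.5, 1.0);
}
"""

def compile_fragment_shader(src=FRAG_SRC):
    shader = glCreateShader(GL_FRAGMENT_SHADER)
    glShaderSource(shader, src)
    glCompileShader(shader)
    if not glGetShaderiv(shader, GL_COMPILE_STATUS):
        # surface the driver's compile log on failure
        raise RuntimeError(glGetShaderInfoLog(shader))
    return shader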

 

The Community

The reason I’m writing this blog post is the community interest in the project. People do send pull requests and file issues, and we definitely could use more folks involved. I am busy with work and can’t spend much time on it right now, but I’m more than happy to guide and help out anyone wishing to contribute. We have an official IRC channel, #projectm on irc.freenode.net, so feel free to hang around there and ask any questions you have. Or just start making changes and send PRs.

Open-source Trusted Computing for IoT

(Originally posted on the Linux Weekly News)

At this year’s FOSDEM in Brussels, Jan Tobias Mühlberg gave a talk on the latest work on Sancus, a project that was originally presented at the USENIX Security Symposium in 2013. The project is a fully open-source hardware platform to support “trusted computing” and other security functionality. It is designed to be used for internet of things (IoT) devices, automotive applications, critical infrastructure, and other embedded devices where trusted code is expected to be run.


A common security practice for some time now has been to sign executables to ensure that only the expected code is running on a system and to prevent software that is not trusted from being loaded and executed. Sancus is an architecture for trusted embedded computing that enables local and remote attestation of signed software, safe and secure storage of secrets such as encryption keys and certificates, and isolation of memory regions between software modules. In addition to the technical specification [PDF], the project also has a working implementation of code and hardware consisting of compiler modifications, additions to the hardware description language for a microcontroller to add functionality to the processor, a simulator, header files, and assorted tools to tie everything together.

Many people are already familiar with code signing; by default, smartphones won’t install apps that haven’t been approved by the vendor (i.e. Apple or Google) because each app must be submitted for approval and then signed using a key that is shipped pre-installed on every phone. Similarly, many computers support mechanisms like ARM TrustZone or UEFI Secure Boot that are designed to prevent hardware rootkits at the bootloader level. In practice, some of those technologies have been used to restrict computers to boot only Microsoft Windows or Google Chrome OS, though there are ways to disable the enforcement for most hardware.

In somewhat of a contrast to more proprietary schemes that some argue restrict the freedom of end-users, the Sancus project is a completely open-source design built explicitly on open-source hardware, libraries, operating systems, crypto, and compilers. It can be used, if desired, in specialized contexts where it is of critical importance that trusted code runs in isolation, on say an automobile braking actuator attached to a controller area network bus, or a smart grid system such as the type that was hacked in Ukraine during the attack by Russia. These are the opposite of general-purpose devices; instead, one specific function must be performed and integrity and isolation are critical.

The problem is that many medical devices, automotive controllers, industrial controllers, and similar sensitive embedded systems are built from limited microcontrollers that may run software modules from different vendors. Misbehaving or malicious software can interfere with the operation of those other modules, expose or steal secrets, and compromise the integrity of the system. Integrity checks based in software are bypassed relatively easily compared to gate-level hardware checking; those checks also add considerable overhead and non-deterministic performance behavior.

Sancus 2.0 extends the openMSP430 16-bit microcontroller with a small and efficient set of strong security primitives, weighing in at under 1,500 lines of Verilog code and increasing power consumption by about 6%, according to Mühlberg. It can disallow jumps to undeclared entry points, and it provides memory isolation and attestation for software modules.

Besides providing a key hierarchy and chain of trust for loading software modules, Sancus has a simple metadata descriptor for each module that stores the .text and .data ranges in memory; it then ensures that a .data section is inaccessible unless the program counter is in the .text range of the appropriate module. This is a simple but effective process isolation mechanism to ensure that secrets are not accessible from other software modules and that one module cannot disturb the memory of other modules.
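
As a conceptual illustration of that rule (in Python rather than Verilog, and with made-up address ranges), the check amounts to: a protected .data address is only accessible while the program counter sits inside the owning module's .text range.

# Conceptual illustration of Sancus' isolation rule (not real Sancus code).
MODULES = {
    "keystore": {"text": (0x6000, 0x6800), "data": (0x0400, 0x0500)},
    "sensor":   {"text": (0x7000, 0x7400), "data": (0x0500, 0x0580)},
}

def in_range(addr, rng):
    lo, hi = rng
    return lo <= addr < hi

def data_access_allowed(pc, addr):
    """Allow a data access only from code of the module that owns the data."""
    for mod in MODULES.values():
        if in_range(addr, mod["data"]):
            return in_range(pc, mod["text"])
    return True  # address is not in any protected .data section

assert data_access_allowed(pc=0x6100, addr=0x0410)      # keystore touching its own data
assert not data_access_allowed(pc=0x7100, addr=0x0410)  # sensor cannot read keystore data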

Sancus 2.0 comes with openMSP430 hardware extension Verilog code for use with FPGA boards and with the open-source Icarus Verilog tool. A simple “hello, world” example module written in C demonstrates the basic structure of a software module designed to be loaded in a trusted environment. There are also more complex examples and a demonstration trusted vehicular component system. An LLVM-based compiler is used to compile software to signed modules designed to be loaded by a trusted microcontroller.

Mühlberg mentioned that there is ongoing work on creating secure paths between peripherals for secure I/O, integration with common existing hardware solutions such as ARM TrustZone or Intel SGX, formal verification, and ensuring suitability for realtime applications.

To give a feel for the system in action, Mühlberg showed a demonstration video comparing two simulated automotive controller networks with malicious code running on a node. One can see the unsecured system behave erratically when receiving invalid messages, whereas the Sancus system gracefully slows down and safely disengages.

Much has been written about the upcoming IoTpocalypse: the lack of security in critical infrastructure and general despair about the dismal state of easily exploitable embedded systems as they multiply and get connected to the internet. A project based on open-source building blocks and free-software ethos that attempts to provide a layer of integrity and deterministic behavior to microcontrollers should be lauded and considered by anyone building hardware applications where security and reliability are strong requirements.

 

Travel: Budapest, Hungary

Budapest is one of my most favorite places of all. The architecture is some of the best in the world, there’s a mix of cultures from Austria-Hungary to Turks to Magyars. Everything I ate there was delicious. Great public transit. Turkish-style baths everywhere. Fun boat tour on the Danube river, separating Buda and Pest.

Good to visit as an American, too. At immigration most foreigners were being fingerprinted and questioned for long periods of time. When they saw my US passport they just waved me through.

My only real complaint is that all written words in Budapest are in Hungarian, a language I have absolutely zero chance of ever learning or understanding. It belongs to the Uralic family along with Finnish and Estonian, and bears basically no relation to Greek, Latin, Romance, or Slavic tongues. Besides the unfortunate language it’s a great place.

[Photos: Parliament; Parliament; Buda; Soviet war memorial; amazing architecture; opera house?; one of my favorite restaurants – Cafe Kör; outside; inside: organ concert; sweet church; energy drink; man in airport approving of Budapest]

Information vs. Encodings

A concept about modern computing that often confuses people is the difference between some piece of data and the encoding, or representation of that data.

Everyone knows computers use binary. They use 1s and 0s to store and manipulate information. Do they use binary numbers?

Computers can only store information as patterns of electrical switches, set in the “on” or “off” position. There is no such thing as a “binary” number, only a number that is encoded as a binary pattern. Numbers are information, and information is abstract: it doesn’t physically exist. We can write a number down in Arabic numerals as “42”, or write it in base 2 as “101010”, but these are merely different ways of encoding the same number. It’s up to us to come up with a scheme for encoding information using whatever is available.

Humans have mostly used base-10 numbering systems throughout history because we have ten fingers. In Roman times people used Roman numerals, which were pretty clumsy and not especially well-suited for arithmetic or algebra. Later, Europeans switched to Arabic numerals (0-9) while keeping the Latin writing system (A-Z).

So the number 42 is still the same number whether it’s written as XLII, “forty-two”, 4️⃣2️⃣, 0x2A, etc. All represent the same number, just encoded different ways. It’s up to the person interpreting the encoding using a particular scheme to translate it from the written-down form into useful information.
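
You can see this directly in Python, which happily converts between representations of the same number:

# The same number, several encodings:
n = 42
print(bin(n))            # '0b101010'  -> base-2 digits
print(hex(n))            # '0x2a'      -> base-16 digits
print(int("101010", 2))  # 42          -> decode the base-2 string back
print(int("2A", 16))     # 42          -> decode the base-16 string back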

This applies not only to numbers but to text, audio, video, web pages, hard disks, subtitles, and anything else one may want to store in some durable form and represent digitally. Assuming lossless encoding, a FLAC of a song is the same information as an AIFF of a song is the same information as a zipped WAV of a song. They all represent the same PCM audio data, just in different formats.

This blog post is a bunch of dumb words that anyone who understands English can make some sense of, but it’s stored as a sequence of bytes using the UTF-8 encoding standard, which is a way of representing Unicode codepoints as sequences of bytes (the encoding works in 8-bit units, hence “UTF-8”). Unicode is a mapping of codepoints (numbers) to characters, with some fancy rules about combining characters and things. Unicode is not a format; there are different ways to encode the codepoints into a machine-processable form.
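
The same text can be encoded into bytes in more than one way, and decoded back:

# Same text, different byte encodings:
s = "héllo"
print(s.encode("utf-8"))     # b'h\xc3\xa9llo'            -> 6 bytes
print(s.encode("utf-16-le")) # b'h\x00\xe9\x00l\x00...'   -> 10 bytes
print([ord(c) for c in s])   # [104, 233, 108, 108, 111]  -> the codepoints themselves
print(b'h\xc3\xa9llo'.decode("utf-8"))  # back to 'héllo'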

As far as computers are concerned you can only deal with bits, grouped into bytes. The most convenient way to store and retrieve any data from RAM or storage or over a network is a stream of bytes. If you want to represent some information in a computer, you need some encoding scheme to translate it to and from a stream of bytes. How you want to accomplish this can be entirely up to you. The information only has the meaning you choose to imbue it with.

Heroku logging to AWS Lambda

If you use Heroku and AWS and want to customize your Heroku application logging, you can hook Logplex up to AWS Lambda.

Background

When a Heroku application emits things to stdout or stderr, they get shuttled to the magical world of Logplex. The logs enter as syslog messages, containing information like facility, priority, etc. This includes not only logs from your application but also logs from Heroku’s build and deploy systems, PostgreSQL, and other add-ons. Shortly after arrival, these logs are dispatched to whatever sinks your Heroku app has configured, which can be add-ons like Papertrail as well as custom log sink URLs. The sink destinations can be syslog(+TLS) or syslog-over-HTTPS using octet counting framing.
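
For illustration, each octet-counted frame is just a decimal length, a space, and then that many bytes of syslog message, concatenated back to back. A minimal Python sketch (the syslog lines themselves are made up):

# Illustrative only: building and parsing a two-frame body with octet counting framing.
msgs = [
    b"<158>1 2017-07-22T03:24:20+00:00 host app web.1 - hello world",
    b"<131>1 2017-07-22T03:24:21+00:00 host app web.1 - boom: an error",
]
body = b"".join(b"%d %s" % (len(m), m) for m in msgs)

def frames(payload: bytes):
    while payload:
        length, rest = payload.split(b" ", 1)   # each frame is "<len> <msg>"
        n = int(length)
        yield rest[:n].decode("utf-8")
        payload = rest[n:]

assert list(frames(body)) == [m.decode() for m in msgs]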

One advantage of this setup is that your application can emit logs with a minimum of blocking. At one point I had my application sending logs to Slack directly, but this added latency to the application any time I logged anything. By sending to Logplex instead, I can process the application messages asynchronously without doing anything remotely fancy in my application. Another benefit is that you can handle your application, database, build, and deploy logs all in the same unified fashion.

Using AWS API Gateway and Lambda you can set up your own Logplex sink and do whatever you desire with the logs coming out of Logplex. This includes your application’s output as well as add-ons and Heroku platform messages. You can then send them into CloudWatch Logs, or even to Slack as in this example:

"""Sample handler for parsing Heroku logplex drain events (https://devcenter.heroku.com/articles/log-drains#https-drains).
Expects messages to be framed with the syslog TCP octet counting method (https://tools.ietf.org/html/rfc6587#section-3.4.1).
This is designed to be run as a Python3.6 lambda.
"""
import json
import boto3
import logging
import iso8601
import requests
from base64 import b64decode
from pyparsing import Word, Suppress, nums, Optional, Regex, pyparsing_common, alphanums
from syslog import LOG_DEBUG, LOG_WARNING, LOG_INFO, LOG_NOTICE
from collections import defaultdict
HOOK_URL = "https://" + boto3.client('kms').decrypt(CiphertextBlob=b64decode(ENCRYPTED_HOOK_URL))['Plaintext'].decode('ascii')
CHANNEL = "#alerts"
log = logging.getLogger('myapp.heroku.drain')
class Parser(object):
def __init__(self):
ints = Word(nums)
# priority
priority = Suppress("<") + ints + Suppress(">")
# version
version = ints
# timestamp
timestamp = pyparsing_common.iso8601_datetime
# hostname
hostname = Word(alphanums + "_" + "-" + ".")
# source
source = Word(alphanums + "_" + "-" + ".")
# appname
appname = Word(alphanums + "(" + ")" + "/" + "-" + "_" + ".") + Optional(Suppress("[") + ints + Suppress("]")) + Suppress("-")
# message
message = Regex(".*")
# pattern build
self.__pattern = priority + version + timestamp + hostname + source + appname + message
def parse(self, line):
parsed = self.__pattern.parseString(line)
# https://tools.ietf.org/html/rfc5424#section-6
# get priority/severity
priority = int(parsed[0])
severity = priority & 0x07
facility = priority >> 3
payload = {}
payload["priority"] = priority
payload["severity"] = severity
payload["facility"] = facility
payload["version"] = parsed[1]
payload["timestamp"] = iso8601.parse_date(parsed[2])
payload["hostname"] = parsed[3]
payload["source"] = parsed[4]
payload["appname"] = parsed[5]
payload["message"] = parsed[6]
return payload
parser = Parser()
def lambda_handler(event, context):
handle_lambda_proxy_event(event)
return {
"isBase64Encoded": False,
"statusCode": 200,
"headers": {"Content-Length": 0},
}
def handle_lambda_proxy_event(event):
body = event['body']
headers = event['headers']
# sanity-check source
assert headers['X-Forwarded-Proto'] == 'https'
assert headers['Content-Type'] == 'application/logplex-1'
# split into chunks
def get_chunk(payload: bytes):
# payload = payload.lstrip()
msg_len, syslog_msg_payload = payload.split(b' ', maxsplit=1)
if msg_len == '':
raise Exception(f"failed to parse heroku logplex payload: '{payload}'")
try:
msg_len = int(msg_len)
except Exception as ex:
raise Exception(f"failed to parse {msg_len} as int, payload: {payload}") from ex
# only grab msg_len bytes of syslog_msg
syslog_msg = syslog_msg_payload[0:msg_len]
next_payload = syslog_msg_payload[msg_len:]
yield syslog_msg.decode('utf-8')
if next_payload:
yield from get_chunk(next_payload)
# group messages by source,app
# format for slack
srcapp_msgs = defaultdict(dict)
chunk_count = 0
for chunk in get_chunk(bytes(body, 'utf-8')):
chunk_count += 1
evt = parser.parse(chunk)
if not filter_slack_msg(evt):
# skip stuff filtered out
continue
# add to group
sev = evt['severity']
group_name = f"SEV:{sev} {evt['source']} {evt['appname']}"
if sev not in srcapp_msgs[group_name]:
srcapp_msgs[group_name][sev] = list()
body = evt["message"]
srcapp_msgs[group_name][sev].append(str(evt["timestamp"]) + ': ' + evt["message"])
for group_name, sevs in srcapp_msgs.items():
for severity, lines in sevs.items():
if not lines:
continue
title = group_name
# format the syslog event as a slack message attachment
slack_att = slack_format_attachment(log_msg=None, log_rec=evt)
text = "\n" + "\n".join(lines)
slack(text=text, title=title, attachments=[slack_att], channel=channel, severity=severity)
# sanity-check number of parsed messages
assert int(headers['Logplex-Msg-Count']) == chunk_count
return ""
def slack_format_attachment(log_msg=None, log_rec=None, title=None):
"""Format as slack attachment."""
severity = int(log_rec['severity'])
# color
color = None
if severity == LOG_DEBUG:
color = "#aaaaaa"
elif severity == LOG_INFO:
color = "good"
elif severity == LOG_NOTICE:
color = "#439FE0"
elif severity == LOG_WARNING:
color = "warning"
elif severity < LOG_WARNING:
# error!
color = "danger"
attachment = {
# 'text': "`" + log_msg + "`",
# 'parse': 'none',
'author_name': title,
'color': color,
'mrkdwn_in': ['text'],
'text': log_msg,
# 'fields': [
# # {
# # 'title': "Facility",
# # 'value': log_rec["facility"],
# # 'short': True,
# # },
# # {
# # 'title': "Severity",
# # 'value': severity,
# # 'short': True,
# # },
# {
# 'title': "App",
# 'value': log_rec["appname"],
# 'short': True,
# },
# # {
# # 'title': "Source",
# # 'value': log_rec["source"],
# # 'short': True,
# # },
# {
# 'title': "Timestamp",
# 'value': str(log_rec["timestamp"]),
# 'short': True,
# }
# ]
}
return attachment
def filter_slack_msg(msg):
"""Return true if we should send to slack."""
sev = msg["severity"] # e.g. LOG_DEBUG
source = msg["source"] # e.g. 'app'
appname = msg["appname"] # e.g. 'heroku-postgres'
body = msg["message"]
if sev >= LOG_DEBUG:
return False
if body.startswith('DEBUG '):
return False
# if source == 'app' and sev > LOG_WARNING:
# return False
if appname == 'router':
return False
if appname == 'heroku-postgres' and sev >= LOG_INFO:
return False
if 'sql_error_code = 00000 LOG: checkpoint complete' in body:
# ignore checkpoint
return False
if 'sql_error_code = 00000 NOTICE: pg_stop_backup complete, all required WAL segments have been archived' in body:
# ignore checkpoint
return False
if 'sql_error_code = 00000 LOG: checkpoint starting: ' in body:
# ignore checkpoint
return False
if appname == 'logplex' and body.startswith('Error L10'):
# NN messages dropped since...
return False
return True
def slack(text=None, title=None, attachments=[], icon=None, channel='#alerts', severity=LOG_WARNING):
if not attachments:
return
# emoji icon
icon = 'mega'
if severity == LOG_DEBUG:
icon = 'information_source'
elif severity == LOG_INFO:
icon = 'information_desk_person'
elif severity == LOG_NOTICE:
icon = 'scroll'
elif severity == LOG_WARNING:
icon = 'warning'
elif severity < LOG_WARNING:
# error!
icon = 'boom'
message = {
"username": title,
"channel": channel,
"icon_emoji": f":{icon}:",
"attachments": attachments,
"text": text,
}
print(message)
slack_raw(message)
def slack_raw(payload):
response = requests.post(
HOOK_URL, data=json.dumps(payload),
headers={'Content-Type': 'application/json'}
)
if response.status_code != 200:
raise ValueError(
'Request to slack returned an error %s, the response is:\n%s'
% (response.status_code, response.text)
)

 

Drawbacks

There is one major deficiency in this system that is worth noting: there is no way for your application to alter the log message’s syslog fields. So even if your application logger knows a particular message is debug, or warn, or error, it all comes across as severity level 6 (info). Logs from other components such as PostgreSQL preserve their severities, but your application is a second-class citizen: there is no mechanism to send actual syslog messages to Logplex, even though add-ons and internal Heroku machinery clearly do. I filed a ticket about this and complained at length, and they told me they have no plans to allow users to send syslog-formatted messages to Logplex; everyone is stuck with only stdout/stderr. This means that if you wish to treat messages of differing severities differently in your Logplex sink, you can’t, at least not with the out-of-band syslog data that your sink receives. As far as the sink can tell, all of your application’s debug logs and error logs look the same, which is frankly an impossible situation when it comes to logging. Hopefully they fix this some day.

Internet Engineering Task Force Meeting 99 – Dispatches

The Internet Engineering Task Force (IETF) is an organization dedicated to stewardship of an ever-expanding body of technical standards to facilitate interoperation of machines and software connected to the internet. Pretty much everything you can do on the internet, including the functioning of the internet itself, is governed by the IETF “Request For Comments” documents known as RFCs. Some standards defined in the RFCs include TCP/IP (internet), SMTP (email), IRC (chat), XMPP (jabber), emergency telephone call information, live video streaming and multitudes more.

The Internet Society facilitates much of the IETF’s work by providing administrative and organizational resources. There is no formal membership roster or special recognition given to governments or corporations. While most of the roughly 1,200 IETF attendees (except for your correspondent) were sent on trips with all expenses paid by their employer or through the IETF fellowship program, there is a strong understanding that everyone there is representing themselves in technical matters. They are all expected to state only opinions they are personally willing to stand behind. The criteria for moving IETF drafts forward are “rough consensus and running code,” though the “running code” part is less of a thing these days than it used to be. To get involved in the process all you have to do is join a working group (WG) mailing list. Anyone can attend any of the tri-annual meetings, which are usually held in North America, Europe, and Asia.

Everything at the meetings, including WG notes, audience questions, and meeting materials, is recorded and made publicly available, along with a live video stream with remote participation.

One of this year’s meetings was held in Prague, a frequent location for the Europe area. It was held at the Prague Hilton, and as part of the event contract the IETF replaced the hotel’s network with its own, setting up its own BGP ASN and a multitude of wireless networks with 802.1X, IPv6-only and NAT64 experimental options, and a DHCP server handing out globally routable addresses with no firewall. As one should expect, the IETF doesn’t screw around when it comes to the meeting network.

The work of the IETF is divided into subject “areas” which are made up of many working groups related to the area. The areas are the internet, operational issues and network management, routing, security, transport, applications and real-time, and general for more meta work.

Each working group in an area has a well-defined charter describing its purpose, and background materials to help frame the discussion. The work done by a WG almost all happens on its dedicated mailing list, with updates and discussion that is much easier to do face-to-face taking place at the meetings in person or via remote video conference.

In addition to the WGs, there are BoF sessions. A BoF (pronounced boff) is a “birds of a feather” group where people who are interested in a topic can come discuss ideas and gauge interest and see if there is IETF-related work to be done on the topic. If so, a working group may emerge from the BoF.

And finally there are research groups which are set up for long-term collaboration on research topics. They have a less focused charter and pursue and share research about a particular topic instead of working towards explicit RFC publication deliverables.

The RGs’ mailing lists are great places to learn about new developments and work being done by academics and in-the-field engineers in subjects of interest. Just this morning I got an email on the GAIA list from the president of the Internet Society of Togo stating that they are experiencing internet shutdowns in the country today.

Attending the IETF meeting in person I was able to see the working groups, research groups and a BoF in action. Allow me to share my first impressions and experiences as a total clueless newcomer.

netvc

 

The Internet Video Codec working group is attempting to subjectively and objectively test and compare several candidate video codecs for use on the internet. netvc is a follow-on to the remarkably successful work the IETF codec WG did on audio codecs, in particular the royalty-free, high-quality and efficient Opus codec.

The topic of non-proprietary codecs is near and dear to my heart and more important than most people realize. Right now if you want to put a video on the web and have it work in all browsers you have but one option: the h.264 video codec, licensed by the MPEG-LA patent cartel. This codec is covered by many patents and is not free in any sense of the word. Mozilla and Google have support for more open and less patent-encumbered video codecs (Ogg Theora, VP8) in their browsers, with Google going far, far out of its way to purchase the VP8 codec and release all patent claims in the hope of giving everyone on the internet an unrestricted and open codec to use without paying royalties or fearing lawsuits. This didn’t work out quite as planned, for two reasons: Google wouldn’t indemnify codec users (and couldn’t reasonably do so under extremely perilous and burdensome US patent law), and Microsoft and Apple refused to include support for the codec in their browsers. Not that it would have been a great amount of effort, as the code is freely available with open source implementations. Having a video format that only works in some browsers doesn’t really cut it for content publishers, so everyone is forced to use h.264 instead. Also, by some unrelated weird coincidence, Microsoft and Apple happen to belong to the MPEG-LA and get a share of royalties from encoder licenses.

This is a rather long-winded way of saying that the standardization of a (relatively speaking) patent-unencumbered, free codec is actually quite crucial to keeping basic modern internet functionality out of the greedy hands of a small number of corporations. This is the kind of hard, constant battle that organizations like the IETF and the Internet Society fight to keep the internet free and open to everyone.

As a point of comparison between VP8 and the eventual result of the netvc video codec selection, users will still unfortunately be in the exact same position with regard to patent indemnification. The IETF cannot guarantee to defend all users from patent trolls. Despite Google’s promotion of VP8/VP9 as an open standard for internet video, many people have treated them as proprietary codecs and desired a non-proprietary alternative.

The netvc working group is evaluating the codecs AV1, VP9 and Thor. Part of the work of the group has been to establish requirements for comparing the codecs on metrics such as high- and low-latency performance (offline vs live encoding), decoder complexity (to optimize CPU/power consumption and hardware acceleration), perceptual quality, error resilience, and Weissman Score (just kidding about that last one).

The general requirements for the internet video codec are that it should be suitable for video calls, broadcast media, conferencing, telepresence, teleoperation, screencasting, and video storage. They are basically aiming to equal or best h.265 (successor to h.264) as far as quality and complexity.

There are double-blind tests that anyone can participate in to subjectively judge video and frame encoding quality in a split view. They test one quantization parameter at a time in both high- and low-latency modes. The gentleman presenting on subjective testing claimed that Mozilla has a 4k projector in the break room they make the interns do tests on for cookies, though I wasn’t super sure if he was serious or not. Approximately 12 viewers are required for each test to be statistically significant. Some of the test corpora include Minecraft Twitch videos, “netflix crosswalk” and “netflix tunnelflag”.

The codecs being compared are works in progress; AV1 has gained about 20% compression over the past year, most of that in the past three months, though with about a 1000% increase in complexity.

AV1 complexity is best vs. Thor and VP9. Thor and VP9 have similar complexity/speed tradeoff profiles for mixed content. Thor measured better than VP9 for video conferences but not quite as good as AV1. They believe it’s possible to get Thor to perform roughly as well as AV1 but with a fraction of the tools and added complexity.

Error resiliency was discussed quite a bit. Since video is often streamed at someone and decoded in near-realtime, the ability to gracefully recover from packet loss is an important consideration. This is a complex problem involving careful trade-offs, because a packet does not represent a frame that can simply be dropped. Most of the time the packets contain backwards-looking prediction information that computes estimated motion vectors from previous frames and against reference frames that the decoder may or may not have received or decoded successfully. A certain amount of redundant information can be part of the packetized payload, but this is a tradeoff between resiliency and the amount of video information that can be packed into a given bitrate. VP9 can reference frame dependencies implicitly or explicitly (with RTP picture ID mappings); there’s no way to know from an RTP header if a dependent frame is available without parsing actual RTP packets. AV1 explicitly signals and codes frame IDs in the codec payload, and there is a proposal to move to motion predictions from the most recent reference frame.

As far as color information in AV1, a technique is being adopted from Daala (a Xiph codec converging with Thor) called CfL – Chroma from Luma. There is a correlation between luma (brightness) and chroma (color) that can be used to predict chroma coefficients directly. It was reported that doing this in the frequency domain sucked, and they are currently proposing to do it in the spatial domain instead.
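
As a toy illustration of the idea only (not the actual Daala/AV1 algorithm), a spatial-domain chroma-from-luma predictor fits a simple linear model from the luma samples of a block to its chroma samples:

import numpy as np

# Toy spatial-domain "chroma from luma" illustration with synthetic data.
rng = np.random.default_rng(0)
luma = rng.uniform(0, 255, size=(8, 8))
chroma = 0.4 * luma + 30 + rng.normal(0, 2, size=(8, 8))   # synthetic correlated chroma

# fit chroma ~ alpha * (luma - mean(luma)) + dc by least squares
luma_ac = luma - luma.mean()
alpha = (luma_ac * (chroma - chroma.mean())).sum() / (luma_ac ** 2).sum()
dc = chroma.mean()
predicted = alpha * luma_ac + dc

print("alpha ~", round(alpha, 3), "residual energy:", round(((chroma - predicted) ** 2).mean(), 2))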

A notable thing about the netvc work has been the virtuous cycle of development it has brought. Simultaneous open development of AV1, Thor, VP9 and previous Daala with non-proprietary code and openly published test results has highlighted the ease and power of open-source collaborative development. Each project takes ideas from the others, improves upon them, and the improvements are fed back into the original project, in a cyclical fashion, with the work and results immediately available to everyone.

t2trg

Overheard at IETF99: “The ‘S’ in IoT stands for ‘Security’”.

The Thing To Thing Research Group (t2trg) highlighted security and interoperability issues with Internet of Things (IoT) devices.

Will IoT networks be friendly to each other? Some concerns exist about interference between vendors in terms of wireless spectrum usage, IP networks (imagine buying devices from different vendors that both want to be DHCP servers), multicast issues, sharing resources like an external IP address. “Every device vendor sees the network they operate on as a wide, big, empty road on which they are the only driver.”

Like UNIX, IoT is awesome because there are so many standards to choose from! There are different areas that different bodies focus on, but with a lot of overlap between schema.org, W3C, LwM2M, IPSO semantics, and more.

Data interoperability is an issue too. Some data models have license terms that are opaque and hard to find out. I would suggest that any vendor trying to license their data models should just… not, but that is just my opinion.

A long-standing question has been service and resource discovery on the network. Imagine if you have a smoke detector from one vendor that wants to flash lights or play an alarm on speakers from other vendors. Multicast DNS is pretty accepted for this but it is fairly limited semantically. We really could use a standard for machine-readable resource enumeration and metadata. Part of the problem here is the difficulty of agreeing on a shared definition of what “metadata” is (just ask the NSA); it took the IETF four attempts to define metadata for security management. There are privacy concerns about announcing what resources a network has. You probably don’t want your pacemaker advertising control capabilities to anyone on the network. Some common infrastructure would be helpful, like a centralized IoT identifier registry. Right now most of the work the RG is doing is stored on repositories and wikis on GitHub.

There is an as-yet unsolved problem: if you buy an internet-connected device, how do you bootstrap security identifiers and credentials for your network and cloud services? How do you connect something to your wireless network that has no screen, or keyboard?

Research and a reference implementation were also presented about one solution for authorizing network access for IoT devices. The proposal, called EAP-NOOB (really), utilizes out-of-band (OOB) communication for network authorization and user account setup. Examples they gave were a smart TV that displays a QR code the user scans with their phone, or a camera taking a picture of a QR code presented on a phone. They suggested other OOB mechanisms such as an audio cable or NFC NDEF message.

perc

I attended the Privacy-Enhanced RTP conferencing WG.

The hard problem that the perc group is trying to solve is how to enable centralized Secure Real Time Protocol (SRTP) conferencing where the central device distributing the media is not required to be trusted with the keys to decrypt the participants’ media.

At the meeting they discussed obscure (to me) technical details regarding best ways to maintain and re-key Secure RTP communications for conferencing involving double-encrypting tunnel components and allowing RTP packet repair by media distributors.

There was an interesting presentation about RED – redundant encoding. This was in a similar vein to the netvc error resilience discussion, evaluating tradeoffs between lower bandwidth efficiency and better handling of dropped packets. In the RED scheme, each RTP packet contains an alternative (low-quality) version of the previous frame for repair purposes, mostly for audio. The main idea is that if packet loss is detected in a poor-quality conference, you could reduce some of the bandwidth used for video and instead allocate it to audio packet repair so that at least audio quality suffers less. The doubled-up audio packets could even be handled by media distributors instead of the streaming source endpoint, which would be a very nice feature for CDNs, distributed networks, and robust media servers.

Some other topics about TLS-IDs in SDP and FlexFEC were discussed but I had no idea what they were talking about.

tsvarea

The findings of a paper on non-volatile main memory (NVMM) by NEC Labs Europe were presented at the Transport Area Open Meeting.

NVMM is a far-along technology coming to mobile devices soon. Computers going back many decades have used volatile main memory, meaning the contents of RAM are lost when the power is turned off. There exists a major practical and abstract barrier between main memory (RAM) and persistent storage (SSDs, disks) because of the differences in volatility, speed, and capacity. With NVMM, main memory can be used as persistent storage. Of course it’s not quite that simple; NVMM costs are higher than RAM and much higher than mass storage devices, and it is not yet faster than typical DRAM. But it is an area with potential applications for accelerating certain tasks.

The researchers investigated the implications for networking, focusing on the use case of downloading a file over a network.

Right now when your computer is downloading a file, the data follows a path from the Network Interface Card (NIC) to DRAM (using DMA, I believe), then is read from DRAM by the OS networking stack, then read() by the application doing the downloading, then a write() is issued to the storage stack, where the data is buffered in DRAM and then flushed to disk. This process was measured to have a latency of about 2000µs. By simply replacing the last bit with a copy from DRAM to NVMM, the latency was reduced to about 40µs, showing that the disk flush was extremely significant, as were the cache misses incurred because the area of DRAM being read from was an ever-advancing pointer.

Part of their solution was to maintain a static ring buffer of packets and a small set of metadata entries containing offset/length indexing information of the packets in the buffer. This helped prevent cache misses as the region of memory for the packets remained fixed. The other change was to DMA packets to L3 cache instead of main memory, and only if packets needed to be stored was the cache flushed to DIMM. They said a 10-88% increase in throughput was obtained and a 9-46% reduction in latency, and the improvements scaled linearly with cores.
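
A greatly simplified sketch of that bookkeeping idea (mine, not the paper's code): packets land in one fixed buffer, and a small metadata table records (offset, length) entries, so reads walk a stable region of memory instead of chasing an ever-advancing pointer.

# Greatly simplified sketch of the receive-side bookkeeping described in the talk.
class PacketRing:
    def __init__(self, capacity=1 << 20):
        self.buf = bytearray(capacity)   # fixed region standing in for the NVMM/cache-resident buffer
        self.meta = []                   # list of (offset, length) index entries
        self.head = 0

    def store(self, packet: bytes):
        n = len(packet)
        if self.head + n > len(self.buf):
            self.head = 0                # wrap around; real code must handle overwrites
        self.buf[self.head:self.head + n] = packet
        self.meta.append((self.head, n))
        self.head += n

    def read_all(self) -> bytes:
        # reassemble the downloaded data by walking the metadata entries
        return b"".join(bytes(self.buf[off:off + ln]) for off, ln in self.meta)

ring = PacketRing()
for chunk in (b"GET /file", b" part-1", b" part-2"):
    ring.store(chunk)
assert ring.read_all() == b"GET /file part-1 part-2"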

The researcher suggested that similar types of optimizations which change assumptions about the persistence of main memory storage can pay large dividends and that there are likely many such areas for taking advantage of NVMM capabilities. Exciting!

ideas

I attended a BoF session for IDentity-EnAbled networks.

From the very cursory glance I gave the BoF, it superficially resembled a topic I’ve long been interested in: the concept of a universal mechanism for identity on the internet. I’ve long thought it would be a massive step forward if internet services could make a basic assumption about the requestor, such as every request containing a public key. Say every request made to a website contained such a public key; you wouldn’t need to register a separate username and password at every site you visit. You could have one universal identity or generate new ones on the fly as desired; it would strongly prove that the requestor is in possession of an unfalsifiable key while also providing pure anonymity at the same time. All data could be end-to-end encrypted and stored securely such that only the owner of that identity could read it, and so much more, all with a very simple change. I even wrote a ton of code for a project for a new application layer based on this concept about ten years ago, but I got a little too carried away with the scope of it and there was no possible way I was going to do it by myself.
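
As a rough sketch of the flavor I have in mind (using the PyNaCl library; the request format is made up, and this is not what the IDEAS group proposed), a client signs every request with a keypair and the server treats the public key itself as the identity:

# Rough sketch of "a public key as your identity" (request format made up).
from nacl.signing import SigningKey, VerifyKey
from nacl.encoding import HexEncoder

# client: one keypair *is* the identity; generate more for throwaway identities
sk = SigningKey.generate()
identity = sk.verify_key.encode(encoder=HexEncoder)        # public key = identity
request = b"GET /inbox"
signature = sk.sign(request).signature

# server: no usernames or passwords, just verify possession of the key
VerifyKey(identity, encoder=HexEncoder).verify(request, signature)
print("request authenticated as", identity.decode()[:16], "...")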

So I was excited that maybe there would be efforts towards standardizing this simple but powerful idea at the IETF. Part of the agenda was a system that even had the same name as my project! Imagine my disappointment when I learned that their plans were impenetrable soups of acronyms and incredibly complex and confusing academic-speak.

Much of the blame lies with me for not reading through the materials ahead of time, to be sure. The IETF meetings assume everyone is up to speed on all the drafts and documents and mailing list traffic. As a newcomer trying to sample many different projects I simply didn’t have hours and hours to read over all the drafts before going to the different meetings. However at all the other sessions I attended I mostly got the gist of everything even if I was not intimately familiar with every detail and issue of conflict at the WG. The IDEAS session was very different.

The session discussed the definition of an identity-identifier split, defining an identifier as something similar to, but not quite, a location identifier, which could be a “valid but often non-routable v4/v6 address” and could “be truncated but managed within a domain of use”. An identity belonged to a machine, not a person. A concept of the HIT (host identity tag) for HIP (the host identity protocol?) was a ‘flat’ namespace of identity tags which were v6-address-looking things. They wanted to separate identifiers from locations, as “IP addresses have overloaded semantics going back to 1993”.

While I should mention again I didn’t do the reading before class, I do have a considerable background in related topics and I didn’t understand the point of their discussion at all and everything seemed mind-bogglingly complex and there were dozens of acronyms tossed around that I’d never heard of. Their solution required complex service topologies with lots of arrows and diagrams, considerable infrastructure, and even a design for HIP that “requires changes in the IP stack.”

I feel how the co-chairs look right now

The ideas presented at IDEAS were so dense, complex, and impenetrable that I simply can’t imagine any kind of widespread adoption of whatever it is they were pitching. As someone who designs and builds complex systems for software services, I have a bad reaction to obviously over-engineered systems and generally prefer simpler, easier-to-understand, if less powerful, solutions. The technical sophistication of a system must be balanced with actual human concerns: ease of adoption, the ability to communicate the design in a clear and concise way to other humans, and making the benefits and trade-offs clear so other humans can make informed choices about your system. This was the only session I attended that felt utterly doomed and depressing, and I couldn’t sit through to the end. In fact it bothered me so much that I did something I was not supposed to: I got up and asked a question without having read all of the materials ahead of time. I paid to be here, might as well get my money’s worth.

“I have a stupid question…” I said to the presenter.

Speaker: “There are no stupid –”

Me: “This all seems incredibly complicated and dense and difficult to grasp. Why not use a public key as an identifier?”

Speaker: “Which format of public key and what algorithm? (is this ID_KEY_ID??)” [language from official meeting notes]

Me: “OpenSSH key format.”

Speaker: CLEARLY you did NOT read the drafts and YEARS of hard ACADEMIC RESEARCH and [your question is stupid].

GRIDS PoC

cellar

The Codec Encoding for LossLess Archiving and Realtime transmission WG was full of great progress and news. Its charter is related in a fashion to the internet video codec WG in that both are standardizing free and open formats for multimedia, in an effort to keep the entire world from being stuck with de facto standards made of proprietary and royalty-encumbered audio and video formats. cellar is focused on lossless archiving of multimedia, as in the United States Library of Congress, to give one example. If digital multimedia is to survive many years of technology changes and new formats, it must be encoded in a well-defined standard and not lose any quality.

From the charter:

“The preservation of audiovisual materials faces challenges from technological obsolescence, analog media deterioration, and the use of proprietary formats that lack formal open standards. While obsolescence and material degradation are widely addressed, the standardization of open, transparent, self-descriptive, lossless formats remains an important mission to be undertaken by the open source community.”

In a nutshell (or Matroska), the group is defining normative guidelines for an official format to be used for representing and containing lossless audio and video data. The choice has been made of Matroska (.MKV) for the container, FFV1 for video, and FLAC for audio. FFV1 is already specified for archival use by the US Library of Congress, and FLAC is widely used by audiophile pirates.

The issues discussed were problems with the existing specifications versus the reference encoder, which has some known issues like integer overflows and incorrect colors that are nonetheless supported by the reference decoder. The next milestone and format version will remove these documented exceptions and “document reality” instead.

The illustrious open-source media codec library FFmpeg supports the Matroska V_FFV1 CodecID without a compatibility layer, but doesn’t write out that codec ID by default in order to preserve compatibility with older versions of FFmpeg. They are ready for the future with a native FFV1 codec ID.

The FFV1 coder is fully described except for the single-pixel Pixel() function. Much is already written in plain English, but a normative C-like description should be given.

FFV1 v4 should support more pixel formats and add native metadata, not relying on the container (MKV) for metadata. FFV1 can transport its own metadata as well.

A description of Matroska was given live via remote video feed (naturally), along with some historical context. It was started in 2002 to store live TV captures because existing containers were unsuitable for them. It was forked into its own project due to disagreements with the community. It borrowed ideas from AVI, Ogg, XML, and the semantic web. Later on came the codecs H.264, H.265, VP8, VP9, AC3, DTS, and Opus. It was adopted by Google and Mozilla for their standardized “WebM” format, designed to be a free and open multimedia standard for the web, consisting of VP8 or VP9 for video and Vorbis or Opus for audio. It is used and supported today but not well-supported by Apple and Internet Explorer due to evilness and greed (see netvc above).

Matroska/WebM is widely supported by open source software players, Windows 10, Blu-ray, smart TVs, Netflix, Nintendo, and YouTube. Recently 360° video and HDR metadata support was added.

Question: “What is the plan for documenting WebM? Will that be a part of the cellar specification?”

Speaker: “WebM is basically the Matroska specification online; WebM doesn’t have anything not in Matroska. Matroska all applies to WebM and the spec says if it applies. They are the same format. I wish Google would help us work on this spec. Mozilla and Google people are on the mailing list but aren’t helping with the spec.”

The cellar working group’s IETF documents are generated from Markdown and EBML-defined XML files. The XML semantics defining EBML can be used to generate code, including all parts of WebM. The Matroska v3 spec was submitted in July 2017, and the v4 spec is due to be submitted in September. The specification is a huge task comprising 243 elements, 33 of which are deprecated. There are seven pending pull requests (text clarifications and codec definitions), and 22 known issues remain, mostly text clarifications along with some format additions, formatting changes, and codec definitions.

saag

The Security Area Advisory Group met to listen to some invited talks on security-related topics relevant to the work of the IETF.

A long and fascinating talk (slides; recommended reading) was given by Kenny Paterson about post-quantum cryptography. PQC is one of those concerns that (as far as is publicly known) is not an immediate problem but something people should be thinking about and planning for well before the time it actually becomes a crisis, if indeed quantum computing ever reaches a point where it can break most classical encryption schemes currently in use today. There’s even an obscure film about this scenario called The Traveling Salesman.

For context, the timeline of the weakening of the hash algorithm SHA-1 was given.

The point being that there were many years between the discovery of a theoretical weakness and an actual successful attack, with a standards organization (NIST) trying to promote an improved version, and resistance from the complacent commercial certificate authorities. That is, until they had a chance to replace their certificates with SHA-2 after mass revocations due to the OpenSSL “heartbleed” vulnerability.

So a sane route might be to continue research today to potentially protect against future quantum computing attacks on classical cryptographic methods, or at the very least to explore and document interesting alternatives to prime factorization and elliptic curve crypto. Some of these include lattice-based, code-based, and non-linear schemes, and elliptic-curve isogenies, and I haven’t the foggiest notion what those are.

Is significant quantum computing on the horizon? People have been saying QC is “a decade away” for several decades now. Also the quote “In terms of fundamental physics …. we’re pretty close to what we need. There’s just tonnes of engineering work…” was mentioned, to the laughter of the engineers in the audience. The speaker said quantum physics laws have been verified to around ten decimal places, which isn’t all that great. Some relevant questions are: “is quantum computing solid against advances in physics?” versus “is public-key crypto vulnerable to algorithmic advances in conventional algorithms for factoring, discrete logs, etc.?”

There exists a company, D-Wave, which produces fantastically expensive machines kept at near-zero temperatures for “quantum annealing”, with some notable customers. Quantum annealing is a quantum version of simulated annealing, a common optimization technique in which the “energy” of a system decreases and settles into local minima/maxima as time goes on.

There have been publicized advances in quantum key distribution, such as a recent experiment by China using QKD over long distances, with mainstream media headlines like “unhackable encryption” and “the future of security”. It should go without saying that such reports are dubious. For one, QKD isn’t really distribution – it expands existing keys. This can already be done with key derivation functions (e.g. PBKDF2) using classical cryptography. The problem with QKD is that it doesn’t work over any great distance: there must be signal-boosting components that decode and then re-encode the transmission stream to send it over long distances, preventing end-to-end encryption. The UK’s NCSC (part of GCHQ) took the unusual step of publishing a white paper bagging on QKD and describing its infeasibility.
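
For comparison, classical key expansion from existing secret material is a one-liner with a standard KDF (Python's hashlib shown here):

# Classical key expansion with a standard KDF: derive a longer key from an
# existing shared secret, no quantum channel required.
import hashlib, os

shared_secret = b"already-agreed-secret"
salt = os.urandom(16)
# pbkdf2_hmac(hash_name, password, salt, iterations, dklen)
derived = hashlib.pbkdf2_hmac("sha256", shared_secret, salt, 200_000, 64)
print(len(derived), "bytes of derived key material")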

The IETF is developing two drafts for hash-based signatures which are considered mature. Other PQC schemes are being researched but are nowhere near standardization. The suggestion was made that the IETF should not lead the standardization effort for PQC but instead follow the lead of the US NIST, and that for the present the IETF should take care not to bake in constraints that would rule out future algorithms, such as too-small maximum field sizes.

Ways Forward?

Participant: “current estimates for key sizes are going to be an order of magnitude larger… so like 50k-bit key sizes. If you have a protocol like UDP where everything fits in one packet, you’re going to have a bad time.”

Participant: “I do have a PhD in nuclear physics and I don’t think QKD is going to work because the engineering parameters are too hard. .. We need a deployment plan for this now, before we have any crypto.”

Another (brief) talk was given on the p≡p (pretty easy privacy) project, a software engineering effort to improve interoperability of privacy and cryptography between instant messaging and email applications, in the vein of S/MIME and OpenPGP. The speaker said the IETF could help with MIME-based message formats, key synchronization, base protocol mapping for email and Jabber, and URI schemes for systems that currently lack message addressing, such as GNUnet, Signal and so on. They said they had a library available with adaptors for Java, C#, Python, Obj-C, Swift and more, with working software for Android, Enigmail, Outlook, iOS and Email/p≡p. It sounded like a great opportunity for IETF standardization and real engineering effort to come together to increase privacy, trust and interoperability all at the same time.

Conclusion:

All in all, the meeting was a great way not only to learn about the intricacies and interesting technical problems that smart people are trying to solve, but to see the process of creating and implementing the standards crucial to the openness and freedom of the internet. So many people take this work for granted and don’t appreciate the constant, difficult effort that thousands of people put in to prevent corporations or governments from monopolizing the function and operation of the internet.

The IETF is distinct from other standards bodies such as the government-influenced ITU or the vendor/carrier-driven 3GPP group for wireless network standards. Without work being done in the open and distributed through a community of volunteers, nefarious actors can and do try to dictate their proprietary solutions for technology, often for their own financial benefit and not necessarily in the interest of the greater good.

Nobody forces the IETF standards on anyone; they are implemented voluntarily by engineers working on internet-related technology to promote interoperability and ensure the underlying protocols, transports, networks and formats remain free and open. Everyone chooses to implement the IETF standards because of Metcalfe’s Law: the value of a telecommunications network is proportional to the square of the number of connected users of the system.

Recognition and support should be given to the work the IETF does to promote freedom and privacy around the world, and I encourage anyone to get involved and join the mailing lists and discussions of any working groups related to their interests.

Post-meeting feeding frenzy

Cross-posted to sfbayisoc.org

IoT Security Through Open Certification

(Cross-posted from SF ISOC blog)

The more jaded nerds who’ve been around the block a few times here in San Francisco have an understandably dismissive attitude towards the use and abuse of technological buzzwords, of which “IoT” is a contemporary offender. In one sense they’re correct: what we’re talking about is embedded systems connected to the internet, big deal. But remind them that it’s a bunch of embedded systems connected to the internet in the context of security, and the salient point is sharply made. They quickly turn from dismissive to despondent, knowing where this is all likely headed.

Obligatory Scary References and Predictions

Where is it headed? You don’t have to turn to prognostication to get a glimpse of the consequences of the Earth being flooded with sloppily-developed firmware. In case you missed it, in September and October of 2016 the Mirai botnet, thousands of embedded devices spanning 36 depressingly poorly secured IoT products shipping with default usernames and passwords, was press-ganged into “multiple major DDoS attacks in DNS services of [the] DNS service provider Dyn […] using Mirai malware installed on a large number of IoT devices, resulting in the inaccessibility of several high profile websites such as GitHub, Twitter, Reddit, Netflix, Airbnb and many others” (https://en.wikipedia.org/wiki/Mirai_(malware)). At volumes of 620-1024 Gbps, these attacks were extremely consequential and disruptive, essentially breaking the internet for many users for the better part of a day.

This attack represented the lowest-hanging fruit possible: default usernames and passwords on internet-addressable devices. The sophistication required was likely minimal.

Even more recently, someone pointed ZMap at the internet to find Raspberry Pis with SSH enabled and the default username and password, and created a worm capable of infecting millions of hosts that probably took the author an afternoon to write.

As these sorts of devices proliferate and attacks increase in sophistication, we can expect a corresponding increase in bad days for network admins, not to mention the hapless end user. The FBI felt the need to issue a PSA to this effect in 2015: “The FBI is warning companies and the general public to be aware of IoT vulnerabilities cybercriminals could exploit.”

The danger is well-known and publicized and not worth belaboring for too long. The real question is of course: what can we do about it?

Incentives and Obstacles

The reason that many IoT products have poor security is not a failure of morals, bad upbringing, or stupidity, but a reasonable economic calculation on the part of the manufacturer. They are concerned primarily with time to market. Taking extra time to design, build and test their code properly only adds delay, for which they see no tangible benefit. These products are made by thousands of large and small manufacturers and pieced together by various developers and engineers around the world, so a top-down regulatory approach is impractical. There are simply too many moving parts, countries, agencies, software libraries and stacks for regulations to keep pace with such a fast-moving target. So what’s to be done?

In the opinion of people smarter than me, what’s needed is an open certification for things connected to the internet asserting a minimum level of security. It doesn’t need to be ultra-rigorous to be of benefit, at least at the basic level. A simple “this device is not almost certainly going to get taken over and wreak havoc” stamp would be a great first step, and one that many manufacturers would fail to earn today.

Why a certification?

A certification process can be designed collaboratively and openly, can be implemented by anyone, doesn’t require action from policymakers, can have different levels of rigor, and most importantly provides a market-based incentive to manufacturers to not make obvious, common blunders. The result can only be greater security and stability for pretty much the entire internet-connected planet. As a user of the internet I have a personal interest in not having everything susceptible to hacks and being used to take down internet infrastructure.

It’s the opinion of respected security professionals that this is a positive and necessary measure.

There would be an incentive for manufacturers to conform to the certification: consumers and institutions should prefer to purchase conforming devices over similar devices that haven’t been vetted. Consider a government or corporate procurement policy that mandates that conforming devices be preferred or required.

This is not a novel idea; there are in fact a small number of company-sponsored certifications already, but as far as I can tell they are proprietary and each run by a single company. The most promising proposal comes from the Online Trust Alliance initiative of the Internet Society. They define a set of best practices for securing IoT devices and also take notifications and privacy into consideration. Their IoT Trust Framework provides solid assurance that a device is trustworthy to deploy, or at least more so than any random off-the-shelf thing.

Other Options

Certification is not the only option for securing Things and embedded devices. Government policy is another possibility, though necessarily limited in its jurisdiction, scope, and ability to keep up with new developments in a rapidly-changing, highly technical field. Also, I don’t get to make policy, but I can help make a certification. As an example of useful legislation, Dan Geer suggests making liability contingent on the openness of the firmware: if you use closed-source, proprietary systems then you are more legally liable for damage caused than if you used open-source software. This is both practical and reasonable, as open-source code can be audited and improved by the community, particularly if you go out of business but your devices remain. He has many more such intelligent suggestions in his 2014 Black Hat keynote, which I highly recommend watching. I also thought highly of his suggestion that devices should either be remotely updateable (with signed updates of course) to patch flaws in the field, or they should “expire” and stop being connected to the internet after some period of time, say five years. Having insecure devices on the internet is one thing; having un-patchable systems that stay around forever is quite another. This could easily be a component of certification.
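To sketch what “signed updates” means in practice, here’s a hypothetical verification step a device could run before flashing a firmware image, using an Ed25519 public key baked in at manufacture time. The file names and the pyca/cryptography dependency are my own assumptions, not anything Geer prescribed:

```python
# Hypothetical sketch: verify a detached Ed25519 signature over a firmware
# image before flashing it. Requires the third-party "cryptography" package;
# file names and key handling are illustrative only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# 32-byte raw public key baked into the device at manufacture time.
with open("vendor_update_key.pub", "rb") as f:
    vendor_key = Ed25519PublicKey.from_public_bytes(f.read())

with open("firmware-v2.1.bin", "rb") as f:
    firmware = f.read()
with open("firmware-v2.1.bin.sig", "rb") as f:
    signature = f.read()

try:
    vendor_key.verify(signature, firmware)   # raises on any mismatch
except InvalidSignature:
    raise SystemExit("refusing to flash: signature check failed")

print("signature OK, applying update")
```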

Another, more extreme approach, one that as far as I’m aware nobody predicted, is that some people, such as the hacker “The Janit0r”, have taken it upon themselves to release worms using vectors similar to the Mirai botnet’s to take over insecure IoT devices and then either brick them or firewall them so that they can’t be used maliciously. The Janit0r claims to have bricked over two million insecure devices so far, so that they can’t be press-ganged into evil servitude. The similar Hajime worm has no DDoS capability and instead blocks ports to lock down the device:

From https://www.symantec.com/connect/blogs/hajime-worm-battles-mirai-control-internet-things:

“There are some features that are noticeably missing from Hajime. It currently doesn’t have any distributed denial of service (DDoS) capabilities or any attacking code except for the propagation module. Instead, it fetches a statement from its controller and displays it on the terminal approximately every 10 minutes. The current message is:

Just a white hat, securing some systems.

Important messages will be signed like this!

Hajime Author.

Contact CLOSED

Stay sharp!

[…]

To the author’s credit, once the worm is installed it does improve the security of the device. It blocks access to ports 23, 7547, 5555, and 5358, which are all ports hosting services known to be exploitable on many IoT devices. Mirai is known to target some of these ports.”
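For what it’s worth, the lockdown Hajime applies is roughly what an administrator (or a sane firmware image) could do voluntarily. Here’s a hedged sketch that drops inbound TCP on those same ports, assuming a Linux device with iptables installed and root access:

```python
# Illustrative only: drop inbound TCP on the ports Hajime is reported to
# block (23, 7547, 5555, 5358). Assumes a Linux device, iptables installed,
# and root privileges; a real firmware would persist these rules properly.
import subprocess

VULNERABLE_PORTS = [23, 7547, 5555, 5358]

for port in VULNERABLE_PORTS:
    subprocess.run(
        ["iptables", "-A", "INPUT", "-p", "tcp",
         "--dport", str(port), "-j", "DROP"],
        check=True,
    )
    print(f"blocked inbound TCP port {port}")
```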

Community and Governance

Another reason for optimism is the response from assorted institutions, individuals and corporations. AWS should be praised for absolutely requiring proper (mutual TLS) authentication for anyone using their IoT platform (a sketch of what that looks like from the device side follows below). On June 8th, 2017 the US NTIA put out an RFC specifically about hardening IoT devices and preventing botnets. The San Francisco Bay Area Internet Society has a new IoT Working Group to promote security and best practices for development, which I’m happy to be leading. If this is a topic of interest to you, there are plenty of communities of people willing to work together to make the coming flood of Things a positive transition instead of an internet minefield.
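As promised above, here’s roughly what that mutual-TLS requirement looks like from the device side, sketched with the paho-mqtt Python client. The endpoint hostname, topic and certificate file names are placeholders I made up:

```python
# Sketch of a device publishing over MQTT with mutual TLS: the broker proves
# itself with a CA-signed certificate and the device authenticates with its
# own client certificate and private key. Endpoint, topic and file names are
# hypothetical placeholders.
import ssl
from paho.mqtt import publish

publish.single(
    topic="devices/thermostat-0042/telemetry",
    payload='{"temp_c": 21.5}',
    hostname="example-ats.iot.us-west-2.amazonaws.com",  # placeholder endpoint
    port=8883,
    tls={
        "ca_certs": "AmazonRootCA1.pem",     # trust anchor for the broker
        "certfile": "device-cert.pem",       # this device's certificate
        "keyfile": "device-private.key",     # and its private key
        "tls_version": ssl.PROTOCOL_TLSv1_2,
    },
)
```

The point is that both sides authenticate: the device checks the broker against the CA certificate, and the broker refuses any device that can’t present a valid client certificate and key.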

 

 

Mischa’s {Fun,Lucrative} Project Ideas

You are free to steal these ideas for yourself or work with me on them. If you make a million bucks please buy me a burrito.

Table of Contents

1 Next-generation live video streaming system

  • Use WebRTC
    • WebRTC is a new standard defining peer-to-peer (usually) live video/audio streaming. It can be easily used in web browsers with JavaScript. It is brand new and not fully deployed yet (supported in all browsers except Safari, with Safari support coming soon).
    • It is a way to do streaming live video without Flash or HLS, both of which suck.
  • Janus
    • Janus is a two-faced server written in C that speaks RTP and RTSP on one side, and WebRTC on the other. It acts like a WebRTC peer and can be used to turn WebRTC into a client/server model, or facilitate client/client communications. It is an awesome project and will enable all kinds of cool, standards-based, real-time video and audio communication in web browsers and other things.
  • Your own peer-to-peer Skype replacement website
    • With Janus and a couple hundred lines of JavaScript you could make a peer-to-peer video calling web page (see the sketch after this list). It’s really not that hard. Why not make something to replace the piece of shit that is Skype? It’ll be insanely popular and you barely need any infrastructure.
  • References
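As mentioned in the list above, the core of a calling page is just the WebRTC offer/answer handshake plus some signalling. Here’s a minimal sketch using the aiortc Python library for brevity; in the browser you’d do the same thing with the JavaScript RTCPeerConnection API and ship the SDP to Janus over its signalling channel:

```python
# Sketch of the WebRTC offer side using aiortc (a Python WebRTC
# implementation). The SDP produced here is what you would hand to Janus
# (or any signalling server) to set up the call.
import asyncio
from aiortc import RTCPeerConnection

async def make_offer() -> None:
    pc = RTCPeerConnection()
    pc.addTransceiver("audio")           # declare we want audio...
    pc.addTransceiver("video")           # ...and video
    offer = await pc.createOffer()       # build the SDP offer
    await pc.setLocalDescription(offer)
    # The remote peer's answer would go into pc.setRemoteDescription().
    print(pc.localDescription.sdp[:300], "...")
    await pc.close()

asyncio.run(make_offer())
```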

2 Economic data overlay for Google Earth

  • I’ve always thought it’d be really cool to have a geospatial visualization of different economic measurements like GDP, PPP, happiness, Gini coefficient, core and non-core inflation, unemployment, corruption, etc. (see the toy sketch after this list). This would give anyone in the world the ability to visualize how different countries (and regions?) stack up against each other in an intuitive way. Graphs and data are not accessible to most people, but visually seeing their country be obviously shittier than neighboring countries could help increase demand for measures to improve the quality of life of their citizenry.
  • Maps and visualizations can convey information and affect people a lot more than articles and numbers can.
  • The trick is finding good, comparable sources of data. Probably the CIA World Factbook and Shadow Govt Stats would be good places to start.
  • References
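As a toy version of the idea referenced in the list above: emit a KML file (the overlay format Google Earth reads) with one placemark per country. The coordinates and GDP-per-capita figures below are made-up sample values; a real version would pull them from the Factbook data:

```python
# Toy sketch: write a KML overlay that Google Earth can open, one placemark
# per country. Coordinates and GDP-per-capita figures are made-up samples.
sample_data = {
    # country: (longitude, latitude, GDP per capita in USD) -- illustrative
    "Norway":   (10.75, 59.91, 75000),
    "Portugal": (-9.14, 38.72, 24000),
}

placemarks = []
for country, (lon, lat, gdp) in sample_data.items():
    placemarks.append(
        "  <Placemark>\n"
        f"    <name>{country}: ${gdp:,}/capita</name>\n"
        f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
        "  </Placemark>\n"
    )

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    "<Document>\n" + "".join(placemarks) + "</Document>\n</kml>\n"
)

with open("gdp_overlay.kml", "w") as f:
    f.write(kml)
print("wrote gdp_overlay.kml")
```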

3 IoT security certification biz

4 AWS consulting biz

  • AWS is awesome and can save companies gazillions of dollars on capex, datacenter, personnel, development and operations costs. If you aren’t using AWS you’re an idiot. Really. You just don’t know it yet.
  • Doing AWS The Right Way isn’t hard but requires some experience or just reading how to do it The Right Way.
  • References

5 Resell AWS functionality (Rekognition, Polly)

6 COBOL modernization biz

  • So many businesses have important applications that are built on old systems like COBOL and JCL.
  • They are desperate to modernize these codebases and stop relying on impossible-to-replace hardware and systems that nobody understands and that they cannot hire anyone to work on.
  • COBOL is incredibly unsexy-sounding and most young people have never heard of it and want to build apps or screw around with JavaScript.
  • Everyone who is qualified to do this is dead or retired.
  • Companies and governments are unable to hire anyone to fix their crap.
  • COBOL is incredibly easy to read. Probably hard to write, but you wouldn’t need to write any.
  • Porting legacy applications to modern systems could be extremely fun and lucrative.

7 Reality capture * VR

8 Hardware projectM

  • I maintain a nifty music visualizer project called projectM. It is an open-source re-implementation of the venerable WinAmp Milkdrop visualizer.
  • It needs some software work to port it to be OpenGL-ES compatible.
  • Once it works with GLES you could easily make an embedded Linux system (probably a Raspberry Pi or something more beefy) that would have audio in and HDMI out.
  • References

9 dancar

Drug War, Still Going For Some Reason

Attorney General Jeff Sessions believes we aren’t tough enough on drugs, and we need to lock people up longer to reduce crimes related to drugs. It’s an interesting theory, one that we’ve implemented at a policy level for quite a few decades. It seems that rarely do politicians or journalists ask the simple question: has it worked so far?

We’ve been issuing longer and longer jail and prison sentences for drug-related crimes, especially notably for the Schedule I “Devil’s Lettuce” scourge of violent hemp addicts inflicting untold misery on the population.

12898354_1077123692308467_5359530434043148041_o

One could point out some flaws in the logic behind the theory that locking people up is good for society, reduces violence and drug usage, and deters would-be tokers and dealers. Putting people behind bars with often violent criminals for years probably doesn’t make them better people. Keeping them away from their children probably doesn’t improve society. Incarcerating them probably isn’t great from a fiscal standpoint. Proscribing personal usage of certain drugs on a demonstrably arbitrary, non-medical, non-biological, non-psychological basis really seems to violate the reasonable principle of not prosecuting victimless crimes. Addiction has been shown to be a disease, strongly predicted by genes in fact, and not a moral failing. The logic that drugs can ruin lives, therefore we should lock users up in a cage with criminals for years away from their families, give them felony charges, and make it incredibly difficult to find gainful employment afterwards, doesn’t seem to be a great life-enhancer either. Meanwhile ethanol, an incredibly toxic, dangerous, addictive, inhibition-destroying, violence-inducing drug and poison to all living tissue, is readily available on most blocks in the country for unlimited purchase, while marijuana is classified as Schedule I: unavailable for research purposes without an extremely cumbersome DEA license, legislated as having no conceivable medical usage, and worthy of felony charges. By the federal government of course, not in the many states which currently permit medical and recreational usage.

One could go on at length about the contradictions in the logic of this drug enforcement policy, but it doesn’t seem to convince some people. After all, it’s a complex topic: there are many very different kinds of drugs, behaviors, societal norms, scare campaigns and wildly-differing personal experiences. I don’t expect this matter to be settled by pure reasoning alone, nor is it necessary to discuss it in the abstract.

What are the goals of the drug war? Ostensibly they are to reduce overdoses, addiction and usage, drive up drug prices, reduce availability, and deter usage and related criminal activity. Reasonable goals, I think most would agree.

Or as one of the top aides to Nixon, who launched the drug prohibition campaign, describes the policy:

“The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people,” former Nixon domestic policy chief John Ehrlichman told Harper’s writer Dan Baum …

“You understand what I’m saying? We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities,” Ehrlichman said. “We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.”

Did you fall for this campaign perhaps? Well, you’re an idiot. But ignoring your simple gullibility, let’s forget about this interview and focus on the laudable goals mentioned previously: reducing demand, access, affordability and overdoses.

cdcwonder2016_1

Opioids—prescription and illicit—are the main driver of drug overdose deaths. Opioids were involved in 33,091 deaths in 2015, and opioid overdoses have quadrupled since 1999.

 – US Centers for Disease Control and Prevention (CDC)

Okay, more people are dying from drugs, particularly opiates. So maybe we aren’t reducing overdoses. How about prices and purity?

Prices for hard drugs have fallen greatly since 1981, while purity has risen. Source: Executive Office of the President, Office of National Drug Control Policy.

past-month-use-of-selected-drugs

Source: Substance Abuse and Mental Health Services Administration (SAMHSA), National Survey on Drug Use and Health

If you do any research you can easily see that overall demand for drugs has remained constant. People who want to do drugs will do them. All of the trillions of dollars spent on incarceration, interdiction and prosecution over fifty years have had little discernible impact on demand. Meanwhile fatal overdoses have greatly increased and hard-narcotic prices have dropped dramatically. All kinds of drugs are trivially easy to obtain (according to my cool friends). Mexico is a failed state, run by narco-terrorists, and many cities are beset by gang violence for the simple reason that drugs are in great demand and quite illegal. Millions of lives have been ruined by the government in pursuit of getting people to not ruin their lives with drugs. So at what point does one look at the current policy and decide maybe it isn’t working out like the theory predicts? What’s the threshold for admitting that maybe the last five decades of locking people up isn’t having the desired result?

Wikipedia: In October 2013, the incarceration rate of the United States of America was the highest in the world, at 716 per 100,000 of the national population. While the United States represents about 4.4 percent of the world’s population, it houses around 22 percent of the world’s prisoners.[1] Corrections (which includes prisons, jails, probation, and parole) cost around $74 billion in 2007 according to the U.S. Bureau of Justice Statistics.[2][3]

US incarceration timeline-clean

On what basis are we evaluating the effectiveness of this policy, if not medicine, price and availability, overdoses, demand, the enrichment of narco-terrorists and violent gangs, lives ruined, money wasted, or principles of justice?