Information vs. Encodings

A concept in modern computing that often confuses people is the difference between a piece of information and the encoding, or representation, of that information.

Everyone knows computers use binary. They use 1s and 0s to store and manipulate information. But do they use binary numbers?

Computers can only store information as patterns of electrical switches, set in the “on” or “off” position. There is no such thing as a “binary” number, only a number encoded as a binary pattern. Numbers are information, and they don’t physically exist anywhere. We can write a number down in Arabic numerals as “42”, or in base-2 as “101010”, but these are merely different ways of encoding the same number. It’s up to us to come up with a scheme for encoding information using whatever medium is available.
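
A quick way to see this is in a language like Python (just one convenient illustration): the three literals below are different notations in the source text, but the machine stores a single value.

```python
# Three notations in the source code, but one and the same value in memory.
assert 42 == 0b101010 == 0x2A

# bin() and hex() don't change the number; they build new *strings*
# that encode it in other notations.
print(bin(42))  # '0b101010'
print(hex(42))  # '0x2a'
```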

Most human cultures have settled on base-10 numbering systems, most likely because we have ten fingers. In Roman times people used Roman numerals, which were clumsy and not especially well suited to arithmetic or algebra. Later, Europeans adopted Arabic numerals (0-9) while keeping the Latin alphabet (A-Z).

So the number 42 is the same number whether it’s written as XLII, “forty-two”, 4️⃣2️⃣, 0x2A, and so on. All of these represent the same number, just encoded in different ways. It’s up to whoever interprets the encoding, using a particular scheme, to translate the written-down form back into useful information.
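
As a toy illustration, here’s a minimal sketch in Python of one such interpretation scheme: a decoder for (a simplified subset of) Roman numerals. The function and table names are made up for this example.

```python
# A made-up decoding scheme: translate Roman numerals (a written-down
# form) back into the underlying number.
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s: str) -> int:
    total = 0
    for i, ch in enumerate(s):
        value = ROMAN_VALUES[ch]
        # A smaller numeral before a larger one subtracts (IV = 4, XL = 40).
        if i + 1 < len(s) and ROMAN_VALUES[s[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

assert roman_to_int("XLII") == 42
assert roman_to_int("MMXXIV") == 2024
```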

This applies not only to numbers but to text, audio, video, web pages, disk images, subtitles, and anything else one might want to store and represent digitally. Assuming lossless encoding, a FLAC of a song is the same information as an AIFF of the song, which is the same information as a zipped WAV of the song. They all represent the same PCM audio data, just in different formats.
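
The zipped-WAV case is easy to demonstrate in miniature: lossless compression yields different bytes that decode back to the exact original. A minimal Python sketch, with placeholder bytes standing in for a real WAV file:

```python
import zlib

# Placeholder bytes standing in for the contents of a real WAV file.
original = b"RIFF\x00\x00\x00\x00WAVEfmt ..."

compressed = zlib.compress(original)
assert compressed != original                    # different bytes...
assert zlib.decompress(compressed) == original   # ...identical information
```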

This blog post is a bunch of dumb words that anyone who understands English can make some sense of, but it’s stored as a sequence of bytes using the UTF-8 encoding, which represents Unicode text as sequences of 8-bit units (hence “UTF-8”). Unicode itself is a mapping of codepoints (numbers) to characters, with some fancy rules about combining characters and the like. Unicode is not a format; there are several ways (UTF-8, UTF-16, UTF-32) to encode the same codepoints into a machine-processable sequence of bytes.
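
You can watch that distinction directly in Python: below, one codepoint gets several different byte encodings, all of which decode back to the same text.

```python
s = "é"  # a single Unicode codepoint, U+00E9

print(f"U+{ord(s):04X}")      # U+00E9: the codepoint itself, just a number
print(s.encode("utf-8"))      # b'\xc3\xa9'          (two bytes)
print(s.encode("utf-16-le"))  # b'\xe9\x00'          (two different bytes)
print(s.encode("utf-32-le"))  # b'\xe9\x00\x00\x00'  (four bytes)

# Every encoding decodes back to the same text.
assert s.encode("utf-16-le").decode("utf-16-le") == s
```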

As far as a computer is concerned, all there is are bits, grouped into bytes. The most convenient way to store or retrieve any data, in RAM, on disk, or over a network, is as a stream of bytes. If you want to represent some information in a computer, you need an encoding scheme to translate it to and from a stream of bytes. How you accomplish that is entirely up to you: the information only has the meaning you choose to imbue it with.
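
To make that concrete, here’s a minimal sketch of inventing an encoding scheme of your own. The format, two little-endian 32-bit floats holding a made-up latitude/longitude pair, is an arbitrary choice for illustration:

```python
import struct

def encode_point(lat: float, lon: float) -> bytes:
    # Our made-up scheme: two little-endian 32-bit floats, 8 bytes total.
    return struct.pack("<ff", lat, lon)

def decode_point(data: bytes) -> tuple[float, float]:
    return struct.unpack("<ff", data)

raw = encode_point(51.5, -0.12)
print(raw)                # 8 opaque bytes, meaningless without the scheme
print(decode_point(raw))  # (51.5, -0.1199999...) once the scheme is applied
```

Note that even the byte order is part of the scheme: a reader that assumed big-endian floats would decode the same eight bytes into different, wrong numbers.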
