If you’re using AWS CDK and Cognito, you probably want a test user account. I use one mainly for testing GraphQL queries and mutations in the AppSync console, which requires you to provide a user pool username and password.
As a result of Russia’s invasion of Ukraine, a few lexical, orthographic, and semantic changes of note have been taking place in the Ukrainian language. They propagate alongside the flood of information, memes, and propaganda flowing over Ukrainian social media, primarily on Telegram and Meta platforms. Some are more widespread than others, and some may not last, but it’s curious to see how war can change the perception of one’s neighbor in such a short period of time, with the language following the changes in attitude.
To summarize information reported by the Ukrainian Telegram channel “Gramota”:
Синонімічний ряд росія – московія тепер доповнили кацапстан, оркостан, мордор.
New synonyms for “russia”/”moscovia” (sic) are: “katsapstan”, “orcostan”, “mordor”. The latter two derive from the widespread Tolkienian references to the invading army as a horde of orcs, owing to the poorly coordinated human wave attacks, slaughtering of civilians, and general disorder characteristic of the russian army. The army is also frequently referred to as “орда” (horde), after the Mongol horde that caused a great deal of destruction in the region in the past.
In addition the authors note the now somewhat commonplace writing of “russia”, “moscow”, “rf”, and “putin” in lowercase, even in some official media. (“Щоб продемонструвати свою зневагу, слова росія, москва, рф, путін ми стали писати з малої літери.”)
Not to be outdone, the Ukrainian Armed Forces suggested writing “russia” with an extra small “r” (“ₚосія”):
A new widely-used term to refer to russians of the putinist persuasion is “rashism” (рашизм) – a novel portmanteau of “russian” and “fascism”.
“The War in Ukraine Has Unleashed a New Word” – Timothy Snyder in New York Times Magazine
Змінили й правила граматики. Тепер принципово кажемо “на росії” у відповідь на їхнє “на Україні”.
This one is a little hard to explain, but different prepositions have been used to refer to being “in Ukraine” – during the Ukrainian SSR days, when Ukraine was officially a state inside the USSR, the preposition “on” (на) was used with the locative or prepositional case with respect to Ukraine. After independence the appropriate preposition “in” (в) has been used to signify a distinct country instead of a region. Apparently some in russia still say “on Ukraine” to be disrespectful, so: “now we say ‘on russia’ in response to their ‘on Ukraine'”.
Нового значення з негативним забарвленням набуло дієслово спасати.
“A new meaning with a negative connotation was acquired by the verb ‘to save'”. As putin’s army came “to save” Ukraine from whatever it was supposedly saving them from, the word now has a sinister association.
Лягаючи спати, ми почали бажати спокійної ночі або тихої ночі. 🌙 Але тут ми не скалькували фразу “Спокойной ночи”. Це просто збіг. Ми вклали в неї свій зміст, переосмисливши значення спокою.
Going to bed, we began to wish each other a peaceful night or a quiet night. 🌙 But here we did not copy the (russian) phrase “Good night” (lit. “peaceful night”). It’s just a coincidence. We put our meaning into it, rethinking the meaning of peace.
Molotov cocktail recast as a Bandera smoothie (with a discussion on the gender of smoothie)
Three weeks of the unprovoked invasion of Ukraine by Russia has demonstrated the essential cruelty and homicidal nature of the Russian military and civilian leadership. Having discovered that their political goal of occupying the country and installing a puppet leader friendly to Moscow was going to be harder than anticipated, they have settled for wholesale slaughter of civilians with no particular goal other than that of terror and enlarging Russia’s territory.
As a genocide scholar I am an empiricist, I usually dismiss rhetoric. I also take genocide claims with a truckload of salt because activists apply it almost everywhere now.
Not now. There are actions, there is intent. It's as genocide as it gets. Pure, simple and for all to see
This is not the first or second time that the Muscovite government has attempted to erase the Ukrainian culture and people for the crime of being born on lands that Russia considers theirs to control. Much of the Russian-speaking populace of Eastern Ukraine are Russians that were moved into the region from Russia, while native Ukrainian families in the area were killed or forcibly relocated to Russia. This region with a higher density of Russian speakers is what Putin has used as a pretext to “protect” ethnic Russians from the violence in the region resulting from the Russian-backed separatists who revolted against the government in 2014. The banning of the Ukrainian culture in the Russian Empire and the deliberate death by starvation of millions of Ukrainians by Stalin were historical attempts at erasure still fresh in the Ukrainian cultural memory, along with recent injustices like Chernobyl and the annexation of Crimea and the Donbas, not to mention the twenty or so wars previously fought between Russia and Ukraine. Considering this it should have not been a surprise to the Kremlin when their invasion force was not welcomed with open arms as liberators.
Since the invasion failed to quickly occupy and control Kyiv, the parade uniforms brought with the soldiers were shelved and the standoff weaponry was hauled out, as in previous Russian military campaigns against unwilling citizenry like in Syria and Chechnya. Due to their fear of entering hostile cities, the Russian military has been targeting critical civilian infrastructure for bombardment. Artillery, ballistic missiles, precision guided missiles, multiple-launch rocket systems, dumb bombs, smart bombs, and everything else that explodes has been lobbed into Ukrainian cities and towns that the Russian Horde comes upon. Hospitals, water treatment facilities, nuclear power plants, internet providers, mobile phone and TV towers, schools, government buildings of every sort, and residential buildings have all been targeted and blown to pieces.
This is a city on the Sea of #Azov and its name is Mariupol. It is named after the Virgin Mary, mother of Jesus Christ. 350 thousand residents of #Mariupol 18 days without water, food and electricity. Hundreds of air bombs hit the hero city. #Ukraine pic.twitter.com/FAl4vhfLks
Entire cities are now without electricity, internet, heat, or water. The strategy appears to be the same as in Aleppo and Grozny: murder and terrorize citizens until the city is no longer a point of resistance, due to surrender or complete razing, whichever comes first.
Bombed maternity hospital.
There are additional domains in which the Kremlin is waging war: the cyber and information spaces. At the time of the invasion a new piece of malware was activated in Ukraine, designed to permanently destroy all data stored on a computer. It did not simply overwrite all files with garbage data; it leveraged a signed disk driver to overwrite the master boot record and corrupted filesystem structures to make recovery impossible. A highly destructive and targeted attack launched right before the invasion.
Breaking. #ESETResearch discovered a new data wiper malware used in Ukraine today. ESET telemetry shows that it was installed on hundreds of machines in the country. This follows the DDoS attacks against several Ukrainian websites earlier today 1/n
And in a modus operandi that everyone should be familiar with now, the Russian government has been using every avenue of communication to get its false messages out regarding Ukraine. That the country is run by a “neo-Nazi junta” who seized power illegitimately, despite the free and fair democratic elections which elected a Jewish president.
When we say Kyiv is winning the information war, far too often we only mean information spaces we inhabit.
Pulling apart the most obvious RU info op to date (as we did using semantic modelling), very clear it is targeting BRICS, Africa, Asia. Not the West really at all. pic.twitter.com/GA5KUQo77S
Russia’s permanent representative to the United Nations Security Council called an emergency meeting to warn of the dangers posed by biological laboratories in Ukraine. In the meeting, Russia’s top UN ambassador claimed without evidence that Ukraine in conjunction with America was gathering DNA samples from Slavic peoples to create an avian delivery system for targeting Slavs with a biological weapon. I’m not making this up, you can watch the meeting yourself.
And in recent days Russian state media has been warning that Ukraine will use chemical weapons: “Ukrainian neo-Nazis are preparing provocations with the use of chemical substances to accuse Russia, the Ministry of Defense said.”
These are just a couple of examples of a massive disinformation campaign coming out of the Kremlin. The claims are easily debunked. Ukraine is in compliance with biological safety inspections and only maintains facilities working with low-danger pathogens (BSL-1 and BSL-2) which are of no threat to anyone. Ukraine maintains no chemical or biological weapons research or weaponry, in contrast to Russia, which operated the largest biological weapons program in world history and is infamous for using nerve agents to assassinate people in modern times, the only country besides North Korea to do so.
While the Russian government tries to convince the world of the aggressive and deadly nature of the Ukrainian threat, it is Russia that has invaded Ukraine and continues to deport and murder civilians and torture journalists in an effort to terrorize the country into submission. As of March 18th, the UN reported that about ten million civilians have fled their homes as a result of the war, with 6.5 million internally displaced and 3.2 million refugees fleeing to other countries. The UN High Commissioner for Human Rights recorded 1,900 confirmed civilian casualties in three weeks, with official Ukrainian estimates much higher.
In the House of Commons, MPs have unanimously adopted a motion to recognize that the Russian Federation is committing acts of genocide against the Ukrainian people. The motion was proposed by Heather McPherson, the NDP critic for foreign affairs. #cdnpoli pic.twitter.com/z4WC44z9N5
In occupied Kherson, an illustrative example, it has been hard to get news out lately because the Russian military destroyed all means of telecommunications, confiscated cell phones, disabled the internet, and only allows citizens to watch Russian propaganda on TV. There are reports of Russian plans to stage a referendum in Kherson so that Russia can annex the city, as was done in Crimea. The Crimean referendum only gave voters two choices: become an independent state or become part of Russia.
Many cities with millions of residents in Ukraine are being destroyed, as can be seen on this interactive map. Endless footage of civilian casualties can be seen on cell phone recordings taken on the ground by Ukrainians. The stories from cities under siege, like Mariupol, Kharkiv, Sumy, Mykolaiv, are the same. Indiscriminate bombardment of military and non-military targets alike, mostly the latter. Attacks on critical civilian infrastructure. Attempts to block electricity, internet access, food, water, and information from reaching the city. Rounding up and arresting Ukrainians critical of Russia.
The Russian plan appears to be to seize territory that it can, and erase territory that it cannot. The new political objective remains unclear.
Russia's deliberate destruction of Ukraine's food stores and grocery shops is painfully evident from @planet satellite imagery, with large grocery stores destroyed and deliberately targeted.
Last night, Russia struck the Retroville Mall near Kyiv, destroying most of it (including this smaller building but not only). At least four people died in this attack. pic.twitter.com/v7aIGMQmUQ
One of the silver linings of this terrible, unnecessary catastrophe is the fact that this ill-conceived invasion is the best documented in history. The dissemination of information about combat, forces, movements, official and unofficial statements, largely via Ukrainian Telegram channels, is swift and unprecedented. Countries use their official twitter accounts to troll and mock belligerents. Soldiers and civilians on the ground post videos of themselves dressing down Russian conscript teenagers and borrowing occupying army hardware.
Huge caveat: the vast majority of what I see is from pro-Ukraine Telegram/Discord/Twitter, it’s only a few days into a massive operation, with major fog of war. This is absolutely not an accurate or complete picture of the war. But it is a darkly amusing one.
Here are a few choice quotes from professional analysts, military, and war nerds:
wait, how did you listen in to russian radio comms lol. Aren’t they supposed to be encrypted, not to mention off the internet..? Right?
It almost looked more as if they were trying to get as far they could down a road until they encountered a road block and were completely unworried about all the amazing angles people with cameras (which could just as easily be rifles) had on them. I’m kinda confused about wtf the idea behind that was too, maybe its just how things worked in syria…?
There are so many videos of russian troops within the cities in light armour and on foot its crazy. I havent seen this much yet. Could it be a sign that shelling is slowing down and they are entering next stage of their plan?
Its absolutely one of the most ill-executed military operations I have ever seen, they’ll use this war in military textbooks for generations to come as an instance of what not to do in strategic/tactical planning and execution.
What’s amazing is that this is fractally stupid – no matter what level you analyze the operation, from tactical to grand strategic, it’s mind-bogglingly stupid.
They’re definitely getting chewed if they enter actual urban combat. What the hell is this formation, military analysts are going to really be scratching their heads what is up with the military and it’s organisation.
Im listening to their comms, very chaotic. They get confused between each others. Also it seems that they are trying to fight ww2 style. Driving in between houses like some peasant with no radio to report while its 2022 and everyone is connected getting intel almost quicker then their radios and coordinating UA forces. And as someone said above their mistake make us think there hould be a logical yet bizarre explanation like “its a plan to outs putin and so on” but i think its their military doctrine not suited for 2022 I do not understand why they are running radios and maps instead of google maps or a chinese knock off. Like it is not like the americans aren’t watching you via satelite
Wow, rare video of the moment of engagement. They’re driving / walking right into ambushes across the city. What, are they trying to lose at this point? Literally feels like they have no control or awareness of the situation. What’s going on. They’re standing there to get shot… They are not very ready for urban fighting judging by that video.
https://twitter.com/RALee85/status/1497809352979361798 Does look like a recon group, not sure why a fuel tanker is driving with them though. That’s a city they’ve bypassed. They need fuel further up the line? That strikes me as a very… aggressive manuver to solve that problem. I have never seen a freaking patrol have a fueler attached Yeah, doesn’t seem like a patrol. I’m so confused honestly. Not sure why they seemed like they were stopping in a couple of different locations as well? An attempt at refueling that just got lost and accidentally drove straight through lines???
“We’re out of gas.”? The logistical problems are bad enough that they’ve… demechanized?
Let’s get into the best Ukrainian and NATO memes four days into this thing.
Latest from Telegram channels: In Ukraine, some local gopniks managed to find a lone Russian APC with the crew, so they beat them up, took the APC and the Russians and gave them to the Ukrainian army. That's one good Cheeki-Breeki right there. pic.twitter.com/2qoZY36hdX
Oh my… an Ukrainian villager again decided to bring back home an abandoned Russian military hardware but this time a 9K33 OSA surface-to-air missile system by tying it to his tractor. pic.twitter.com/9EeR1PA6V2
New (to me) dimension of crowdwork platforms: Russian military uses Premise microtasking platform to aim and calibrate fire during their invasion of Ukraine. Example tasks are to locate ports, medical facilities, bridges, explosion craters. Paying ¢0.25 to $3.25 a task. pic.twitter.com/kHTO2tSCUH
For some strange reason Russia offered to host negotiations at Gomel, in Belarus. Belarus is a belligerent in this war against Ukraine, so it was an odd choice of location. The Russian negotiating team was left to negotiate with themselves.
Zelensky press-secretary: we are aware that Russian delegation arrived in Homiel, we've indeed discussed talks there, but then Russia issued ultimatum that Ukrainian military should lay down the weapons, so there will be no Ukrainian delegation there https://t.co/WrlFGYBJTv
Zelensky is the number one target for the entire Russian military, spetznaz, FSB, GRU, Rosgvardia all out trying to get his ass, and he’s still making videos on the streets of Kyiv saying they are all here and not going anywhere. Imagine any American politician doing this. https://t.co/0j5fzovsWL
The sign is an obvious photoshop but one actually posted by the interior ministry to make the point.
!!! Ukraine's Interior ministry asked residents to take down street signs in order to confuse oncoming Russian troops. The state road-signs agency went one step further. (Roughly: all directions are to "go fuck yourselves") pic.twitter.com/8xVjceqRfx
Electronic roadsigns on the road from Boryspil airport – “russian ship – fuck off.”
Electrocar chargers in Moscow/St. Petersburg not working – "how can this be?" Reading "Glory to Ukraine"… "glory to heros"… "Putin fuck off"… "death to … er whatnow?" (Ukrainian word – "enemies") pic.twitter.com/8o4Pcl60wD
Putin reportedly has a $97 million luxury yacht called "Graceful". A group of Anonymous hackers on Saturday figured out a way to mess with maritime traffic data & made it look like the yacht had crashed into Ukraine's Snake Island, then changed its destination to "hell": pic.twitter.com/Ch53lcG7D6
A snapshot of the Russian economy: an investment expert goes live on air and says his current career trajectory is to work as "Santa Claus" and then drinks to the death of the stock market. With subtitles. pic.twitter.com/XiPVTSUuks
Some trolling from Ukraine: Head of Anti-corruption agency of Ukraine sent a letter to Russian defense minister Shoigu thanking him for embezzlement of Russian army pic.twitter.com/nIGQtSYnGz
Unlike many in the media and in the chattering classes, I have an acute need to keep up accurately with the “situation” going on between Russia and Ukraine, as my home is in Ukraine. I need to know if it’s safe to stay there or not, so I have been following developments closely. By this I do not mean watching CNN or spending much time reading the mainstream press; I mean following the events on the ground alongside statements, press, and propaganda from Russia, NATO members, the so-called DNR/LNR, Belarus (the most comical), Ukraine, and other interested parties. I’m able to do this thanks to a terrific OSINT Discord in which there are of course randos like myself but also experienced intelligence analysts, military personnel, journalists, and people on the ground all around the region. We look at satellite imagery, TikToks (there are dozens of videos posted every day in Russia and Belarus of troop and hardware movements), flights, news reports, press statements, diplomatic evacuations, and more.
So what’s going on? The TL;DR is that the situation is dangerous and the tension has only been building, with no sign of de-escalation. While the media and politicians in the West have apparently been going bugfuck non-stop (some have suggested, to distract from domestic issues), there are extremely valid reasons to be concerned that something up to and including a military invasion will happen. Many hybrid war elements, including large-scale cyberattacks and misleading news, have been ongoing and directed at Ukraine in recent days. Whether or not a full-scale military invasion will happen is known only by Putin at this point, but the alarm bells are being rung for good reasons.
Let me attempt to summarize why, starting with some publicly available military movements first:
While Russia and Belarus have announced the military exercises taking place, these exercises represent only a very small fraction of the forces that have been deployed. The deployed forces are mostly not in the regions where the exercises have taken place, and the scale of the build-up vastly exceeds the scope of the exercises.
The Russian Federation Baltic fleet has moved amphibious landing ships and submarines to the Black Sea, which was not scheduled.
Great numbers of units from the Eastern and Southern Military Districts have been relocated to the border.
Approximately 60% of Russia’s vast combined arms have moved to the Ukrainian border. The current estimates range from 140,000 troops to 180,000 troops split into 83 battalion tactical groups.
The U.S. intelligence community upgraded its warnings because of significant quantities of blood being moved to the field, where it has a shelf life of about three weeks. A precious resource, especially during covid, not normally used in exercises.
A large number of military hospital tents have been set up. Maybe for exercises but unlikely.
Recently Russian tanks have begun moving under their own power towards the border on city streets, tearing them up. Typically one does not destroy one’s own infrastructure during exercises.
Russia’s national guard Rosgvardia has been seen moving to the border. They would be expected to follow an incursion and secure newly-controlled territory.
Ramzan Kadyrov’s personal troops (“Sever” company) have been seen moving from Chechnya to the border. I would not want to meet them under any circumstances. Troops have been filmed boarding trains in Dagestan.
A massive array of S-300s and S-400s, with transloaders and missiles, has amassed at the border, with enough range to guarantee complete air supremacy.
A complement of Iskander ballistic missile systems accompanies the troops. These would be used in any initial attack to neutralize airfields and for SEAD (suppression of enemy air defenses).
The 1st Guards Tank Army has been forward deployed to Voronezh, on the border. These are the most elite ground troops Russia has, earmarked for the general staff, and would comprise the tip of the spear of any invasion.
Russian troops and hardware are not only in training grounds but have been moved to forward operating bases and actively deployed in the field. Given the snow, mud, and shitty conditions, it’s very unlikely that this posture can be kept up indefinitely.
Russia has stated that troops are moving away from the border and returning to bases after the completion of exercises. This is demonstrably false, as they have moved closer to the border and at least 7,000 additional troops have appeared in the last couple of days.
In the last couple days there has been a significant increase in artillery fire in the Donbass, reportedly mostly coming from the Russian side, likely attempting to provoke a reaction that can be used as a pretext for invasion.
In short, all of these elements together do not necessarily mean there will be an invasion of Ukraine in the near term, but if one were about to take place, this is precisely what one would expect to see. If it’s a ruse, it’s an extremely convincing one.
But the military posture is not the only cause for concern. The buildup of troops and hardware is one precondition, but it would be expected to be preceded by hybrid information war and cyber attacks. These have been dramatically scaled up since the 15th of February:
Multiple banks were taken offline at the same time. I was unable to log into my bank because the authentication server was offline.
The ministries of the interior and defense and the president’s website were taken offline. The A record for mil.gov.ua vanished and was unresolvable by CloudFlare.
The gov.ua DNS service sustained a DDoS attack exceeding 60 Gbps.
Many Ukrainians were sent SMS messages advising them to withdraw money from ATMs as soon as possible.
Russian news has been pumping out false or greatly exaggerated stories of mass graves, Nazi death squads, active genocides, and preparations for invasion of Donbass by Ukrainians. Any and every possible pretext for a Russian invasion has been floated in the media by official sources, LNR/DNR media, and Belarusian sources.
In addition, in the past couple months:
The main general-purpose citizen mobile app (Дія), which is used for tax records, ID, Covid certification, and other functions, was hacked, and the personal records of most Ukrainian citizens and residents were posted for sale on the darkweb.
Car insurance records on finances and addresses of many Ukrainians were stolen.
Around a terabyte of emails and documents from various ministries was reportedly stolen and published.
These are just a few selected observations out of many that I’ve seen go by. This is all based on open-source intelligence. The most urgent warnings have been coming from the U.S., U.K., Canada, and Australia. This is notable because they are four of the Five Eyes, the countries with access to the most advanced and exceptional signals intelligence. More recently, Israel has been loudly sounding the alarm and increasing El Al flights to evacuate Israelis from Ukraine. Many have pointed out that when Israel is concerned, it’s worth taking notice.
People in the media with less, let’s say, granular accounts have been quoting unnamed intelligence officials about specific dates and times of an invasion. I would not put too much stock in such reports because they are often of a more propagandistic nature. But I would very much look at the facts on the ground in conjunction with more official warnings. These official warnings have not predicted a specific date of invasion, only the date upon which an invasion would be completely ready to go. It’s reasonable to be skeptical of these reports, but I believe they are not just pure fabrication or without basis in intelligence, publicly available or otherwise.

There is a hypothetical argument to be made that by leaking intercepts and intelligence assessments, the U.S. has caused Putin to reconsider plans for invasion. This is a possibility, one of many, but one we cannot know today. Maybe in 15 or 20 years we’ll be able to look back and see what really happened in these times. Perhaps it is all merely military exercises, perhaps it is a move to permanently station Russian forces in Belarus, perhaps it was an attempt at diplomacy that failed(?), perhaps it was meant to intimidate Ukraine into accepting the Minsk agreements. It is clear that these maneuvers were many months or years in planning, executed at great expense, and not merely ordinary troop movements. There was a deliberate effort here to achieve something, opaque as that something may be at this moment.
What could the goal of these efforts be? Some say it is a bluff by Putin, to secure concessions from NATO and the U.S. by scaring everyone into thinking they will launch an attack on Ukraine in case their demands are not met. It’s no secret that the Russian Federation feels existentially concerned about the expansion of NATO, an explicitly anti-Russian alliance. They feel that the U.S.’s claims of upholding a rules-based international order and the sanctity of internationally recognized borders are laughably false. Sadly it must be admitted that they have a point. The NATO bombing of Serbia and recognition of Kosovo, the illegal wars of aggression in Iraq and Afghanistan, and more recently the covert and overt military interventions in places like Libya and Syria by the U.S. obviously run counter to the stated values and norms that are supposed to be so inviolable and non-negotiable. As an American I truly wish my government had more credibility and moral high ground here. Anyone who doesn’t have amnesia can see how hypocritical much of the moral posturing is, and Russia will play this up to the greatest extent possible.
However I am skeptical that this massive, expensive, extraordinary military buildup and active hybrid warfare aimed at Ukraine is purely about securing agreement from the U.S. and NATO. This is because their demands, given in writing, were clearly impossible to meet and Putin doubtlessly knew this. There is zero reason to believe Russia seriously expected NATO to kick out all of the members who joined since 1997. They also know that Ukraine is not going to be joining NATO anytime soon because of the active conflict in Donbass, among other reasons. The negotiations have been an obvious farce, so what would be the point of a bluff? If it is a bluff of an imminent attack, it certainly may be the most elaborate and convincing in all of modern history. No one hopes more than me that an invasion will not take place, and I think it unlikely that bombs will start falling on Kyiv, but I need to assess the situation rationally. Even if the risk is small, is it worth staying in Ukraine right now as all this is happening? Would you?
As to why former USSR countries desperately want to be a part of NATO, this is left as an exercise for the reader. In my personal opinion the only peaceful and lasting solution to this larger conflict would be for NATO to offer a path to Russia to join, with preconditions on a more democratic political system. This would take all of the wind out of Putin’s sails, prove that NATO is not purely an anti-Russia military alliance, and provide an avenue for political pressure to push the country in a positive direction as offering NATO and EU membership to other countries has done.
On at least one point, Russia has been consistent and persistent: that Ukraine must implement the Minsk agreements, which were signed as a ceasefire in 2015 under extreme duress. Russia’s interpretation of the agreements would effectively give Russian-backed separatists in the Donbass seats in parliament, political control, and a veto over Ukraine’s foreign policy. Such an agreement, essentially signed at the time with a gun to their heads, is unimplementable in Kyiv today. Any government implementing Russia’s interpretation would be gone within a week, probably violently. Too many Ukrainians have fought and died to give power over their country to Russia. Russia knows this and continues to push for it because they can say they are just trying to address the situation diplomatically. It is dreadfully cynical.
Another relevant agreement which Russia is not quick to bring up is the Budapest Memorandum, which was an agreement signed in 1994 by the U.S., U.K., Russia, Ukraine and others guaranteeing freedom from aggression and violations of borders in exchange for Ukraine giving up its nuclear weapons. To quote Wikipedia:
On 4 March 2014, the Russian president Vladimir Putin replied to a question on the violation of the Budapest Memorandum, describing the current Ukrainian situation as a revolution: “a new state arises, but with this state and in respect to this state, we have not signed any obligatory documents.” Russia stated that it had never been under obligation to “force any part of Ukraine’s civilian population to stay in Ukraine against its will.” Russia tried to suggest that the US was in violation of the Budapest Memorandum and described the Euromaidan as a US-instigated coup.
At the UN Security Council meeting in January on the Russian military buildup, the Russian ambassador blasted a shotgun of non-sequiturs, ranging from Colin Powell’s evidence of WMDs in Iraq to the “CIA-backed color revolution installing Nazis in power” in the Maidan revolution (please don’t let me catch you repeating this profoundly inaccurate propaganda, even if you heard it on your lefty podcasts) to “Ukrainian aggression” against Russian-speaking peoples. Following this verbal assault he regretfully excused himself because of an unmovable prior commitment, just as the Ukrainian ambassador was about to begin his remarks. Since then, Russian ministers have been asserting the need to intervene in the event of genocide against Russian speakers in Ukraine, propaganda pushed by state news agencies such as RIA Novosti in the past few days. The false narratives constantly put out by state-owned media in Russia about the atrocities being committed in Ukraine have been reaching a fever pitch. If you think the media in the West is hysterical, you should see what they’re saying on Russian TV.
How to invade and split up Ukraine, on Russia 1.
Some say that Russia has done considerable damage against Ukraine without an invasion, and this is indeed true. The economic and human costs since 2014, and particularly in recent weeks, have been enormous. Over 14,000 lives have been lost in the conflict, and many flights over Ukrainian airspace have been canceled because insurance companies refuse to insure flights to and over Ukraine, remembering the MH-17 tragedy early in the war, when a civilian airliner was shot down with Russian weaponry. Billions of dollars in damage is being done to the Ukrainian economy, and tourism is basically canceled.
Foreign ministries from the U.S., U.K., Australia, Sweden, Finland, Israel, Germany, Italy, UAE, Kuwait, Japan, Lithuania, and many other countries have told their citizens to leave immediately in no uncertain terms.
The U.S. embassy in Kyiv has been deactivated, the computers destroyed, and the staff evacuated to Lviv or outside the country. The Russian embassy was seen burning something today, most of its members evacuated as well. Some extremely VIP personnel were seen driving in black SUVs to the Polish border, running to a Black Hawk helicopter with a medevac callsign, and then quickly being whisked away.
Difficult to understand how heavy the fighting has been in eastern Ukraine today until you compare it to the rest of the year so far.
The Ukrainian MoD reported more ceasefire violations today than they did for all of January. pic.twitter.com/JBVOobNcX9
The people who have it the worst are the poor residents of the Donbass. This morning a pre-school was shelled, with three staff injured. Ukraine isn’t even the real concern of Russia, NATO is. But here we are, caught in the middle as usual. Ukrainians don’t want to be pawns in some madman’s game, just to live in peace.
Web3 is: read/write/execute with artificial scarcity and cryptographic identity. Should you care? Yes.
What?
Let’s break it down.
Back when I started my career, “web2.0” was the hot new thing.
What?
The “2.0” part of it was supposed to capture a few things: blogs, rounded corners on buttons and input fields, sharing of media online, 4th st in SOMA. But what really distinguished it from “1.0” was user-generated content. In the “1.0” days if you wanted to publish content on the web you basically had to upload an HTML file, maybe with some CSS or JS if you were a hotshot webmaster, to a server connected to the internet. It was not a user-friendly process and certainly not accessible to mere mortals.
The user-generated content idea was that websites could allow users to type stuff in and then save it for anyone to see. This was mostly first used for making blogs like LiveJournal and Movable Type possible, later MySpace and Facebook and Twitter and wordpress.com where I’m still doing basically the same thing as back then. I don’t have to edit a file by hand and upload it to a server. You can even leave comments on my article! This concept seems so mundane to us now but it changed the web into an interactive medium where any human with an internet connection and cheap computer can publish content to anyone else on the planet. A serious game-changer, for better or for worse.
If you asked most people who had any idea about any of this stuff what would be built with web 2.0 they would probably have said “blogs I guess?” Few imagined the billions of users of YouTube, or grandparents sharing genocidal memes on Facebook, or TikTok dances. The concept of letting normies post stuff on the internet was too new to foresee the applications that would be built with it or the frightful perils it invited, not unlike opening a portal to hell.
Web3
The term “web3” is designed to refer to a similar paradigm shift underway.
Before getting into it I want to address the cryptocurrency hype. Cryptocurrency draws in a lot of people, many of dubious character, who are lured by stories of getting rich without doing any work. This entire ecosystem is a distraction, although some of the speculation is based on organizations and products which may or may not have actual value and monetizable utility at some point in the present or future. This article is not about cryptocurrency but about the underlying technologies which can power a vast array of new technologies and services that were not possible before. Cryptocurrency is just the first application of this new world but will end up being one of the most boring.
What powers the web3 world? What underlies it? With the help of blockchain technology a new set of primitives for building applications is becoming available. I would say the key interrelated elements are: artificial scarcity, cryptographic identity, and global execution and state. I’ll go into detail about what I mean here, although explaining these concepts in plain English is not trivial, so I’m going to skip over a lot.
Cryptographic identity: your identity in web3-land consists of what is called a “keypair” (see Wikipedia), also known as a wallet. The only thing that gives you access to control your identity (and your wallet) is the fact that you are in physical or virtual possession of the “private key” half of the keypair. If you hold the private key, you can prove to anyone who’s asking that you own the “public key” associated with it, also known as your wallet address. So what?
Your identity is known to the world as your public key, or wallet address. There is an entire universe of possibilities that this opens up, because only you, the holder of your private key, can prove that you own that identity. To list just a few examples (a minimal code sketch follows the list):
No need to create a new account on every site or app you use.
No need for relying on Facebook, Google, Apple, etc to prove your identity (unless you want to).
People can encrypt messages for you that only you can read, without ever communicating with you, and post the message in public. Only the holder of the private key can decrypt such messages.
Sign any kind of message, for example voting over the internet or signing contracts.
Strong, verifiable identity. See my e-ID article for one such example provided by Estonia.
Anonymous, throwaway identities. Create a new identity for every site or interaction if you want.
Link any kind of data to your identity, from driver’s licenses to video game loot. Portable across any application. You own all the data rather than it living on some company’s servers.
Be sure you are always speaking to the same person. Impossible to impersonate anyone else’s identity without stealing their private key. No blue checkmarks needed.
Illustration from Wikipedia.
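To make the keypair mechanics concrete, here is a minimal sketch using Python’s cryptography library. Ed25519 keys are an assumption chosen for brevity (Ethereum-style wallets use secp256k1 keys instead), but the sign-and-verify principle is identical:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# generate a keypair; the private key is the only secret
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # this half is your shareable identity

# sign a message: only the holder of the private key can produce this
message = b"I am voting yes on proposal 42"
signature = private_key.sign(message)

# anyone holding only the public key can verify who signed it
try:
    public_key.verify(signature, message)
    print("valid: this message really came from that identity")
except InvalidSignature:
    print("invalid signature")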
There are boundless other possibilities opened up with cryptographic identity, and some new pitfalls that will result in a lot of unhappiness. The most obvious is the ease with which someone can lose their private key. It is crucial that you back yours up. Like write the recovery phrase on a piece of paper and put it in a safe deposit box. Brace yourself for a flood of despairing clickbait articles about people losing their life savings when their computer crashes. Just as we have banks to relieve us of the need to stash money under our mattresses, trusted (and scammer) establishments with customer support phone numbers and backups will pop up to service the general populace and hold on to their private keys.
Artificial scarcity: this one should be the most familiar by now. With blockchain technology came various ways of limiting the creation and quantity of digital assets. There will only ever be 21 million bitcoins in existence. If your private key proves you own a wallet with some bitcoin attached you can turn it into a nice house or lambo. NFTs (read this great deep dive explaining WTF a NFT is) make it possible to limit ownership of differentiated unique assets. Again we’re just getting started with the practical applications of this technology and it’s impossible to predict what this will enable. Say you want to give away tickets to an event but only have room for 100 people. You can do that digitally now and let people trade the rights. Or resell digital movies or video games you’ve purchased. Or the rights to artwork. Elites will use it for all kinds of money laundering and help bolster its popularity.
Perhaps you require members of your community to hold a certain number of tokens to be a member of the group, as with Friends With Benefits to name one notable example. If there are a limited number of $FWB tokens in existence, it means these tokens have value. They can be transferred or resold from people who aren’t getting a lot out of their membership to those who more strongly desire membership. As the group grows in prestige and has better parties the value of the tokens increases. As the members are holders of tokens it’s in their shared interest to increase the value the group provides its members. A virtuous cycle can be created. Governance questions can be decided based on the amount of tokens one has, since people with more tokens have a greater stake in the project. Or not, if you want to run things in a more equitable fashion you can do that too. Competition between different organizational structures is a Good Thing.
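As a rough sketch of how such token gating can be checked in practice, here is what a balance check might look like with the web3.py library. The RPC endpoint, token contract address, and threshold below are hypothetical placeholders, not FWB’s actual contract:

from web3 import Web3

# hypothetical RPC endpoint and ERC-20 token contract address
w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

# minimal ERC-20 ABI: we only need balanceOf for gating
ERC20_ABI = [{
    "name": "balanceOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "owner", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

def is_member(wallet_address: str, threshold: int) -> bool:
    """Gate membership on holding at least `threshold` tokens."""
    token = w3.eth.contract(address=TOKEN_ADDRESS, abi=ERC20_ABI)
    balance = token.functions.balanceOf(
        Web3.to_checksum_address(wallet_address)
    ).call()
    return balance >= threshold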
This concept is crucial to understand and so amazingly powerful. When it finally clicked for me is when I got super excited about web3. New forms of organization and governance are being made possible with this technology.
The combination of artificial scarcity, smart contracts, and verifiable identity is a super recipe for new ways of organizing and coordinating people around the world. Nobody knows the perfect system for each type of organization yet but there will be countless experiments done in the years to come. No technology has more potential power than that which coordinates the actions of people towards a common goal. Just look at nation states or joint stock companies and how they’ve transformed the world, both in ways good and bad.
The tools and procedures are still in their infancy, though I strongly recommend this terrific writeup of different existing tools for managing these Decentralized Autonomous Organizations (DAOs). Technology doesn’t solve all the problems of managing an organization of course; there are still necessary human layers and elements and interactions. However, some of the procedures that have until now rested on a reliable and impartial legal system (something most people in the world don’t have access to) for the management and ownership of corporations can now be partially handled with smart contracts (e.g. for voting, enacting proposals, gating access), and investment, membership, and participation can be spread to theoretically anyone in the world with a smartphone instead of being limited to the boundaries of a single country and (let’s be real) a small number of elites who own these things and can make use of the legal system.
Any group of like-minded people on the planet can associate, perhaps raise investment, and operate and govern themselves as they see fit. Maybe for business ventures, co-ops, nonprofits, criminal syndicates, micro-nations, art studios, or all sorts of new organizations that we haven’t seen before. I can’t predict what form any of this will take but we have already seen the emergence of DAOs with billions of dollars of value inside them and we’re at the very, very early stages. This is what I’m most juiced about.
Check out the DAO Dashboard. This is already happening and it’s for real.
And to give one more salient example: a series of fractional ownership investments can be easily distributed throughout the DAO ecosystem. A successful non-profit that sponsors open source development work, Gitcoin, can choose to invest some of its GTC token in a new DAO it wants to help get off the ground, Developer DAO. The investment proposal, open for everyone to see and members to vote on, would swap 5% of the newly created Developer DAO tokens (CODE being the leading symbol proposal right now) for 50,000 GTC tokens, worth $680,000 at the time of writing. Developer DAO plans to use this and other funds raised to sponsor new web3 projects acting as an incubator that helps engineers build their web3 skills up for free. Developer DAO can invest its own CODE tokens in new projects and grants, taking a similar fraction of token ownership in new projects spun off by swapping CODE tokens. In this way each organization can invest a piece of itself in new projects, each denominated in their own currency which also doubles as a slice of ownership. It’s like companies investing shares of their own stock into new ventures without having to liquidate (liquidity can be provided via Uniswap liquidity pools). In this case we’re talking about an organic constellation of non-profit and for-profit ventures all distributing risk, investment capital, and governance amongst themselves with minimal friction that anyone in the world can participate in.
Global execution and state: there are now worldwide virtual machines, imaginary computers which can be operated by anyone, where the details of their entire history, operations, and usage are public. These computers can be programmed with any sort of logic, and the programs can be uploaded and executed by anyone, for a fee. Such programs today are usually referred to as smart contracts, although that is really just one possible usage of this tool. What will people build with this technology? It’s impossible to predict at this early stage, like imagining what smartphones would look like when the PC revolution was getting started.
These virtual machines are distributed across the planet and are extremely resilient and decentralized. No one person or company “owns” Ethereum (to use the most famous example) although there is a DAO that coordinates the standards for the virtual machine and related protocols. When a new proposal is adopted by the organization, the various software writers update their respective implementations of the Ethereum network to make changes and upgrades. It’s a voluntary process but one that works surprisingly well, and is not unlike the set of proposals and standards for the internet that have been managed for decades by the Internet Engineering Task Force (IETF).
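As a small illustration of this “public computer” idea: anyone can read the machine’s state with a few lines of web3.py. The RPC endpoint is a placeholder you would swap for a real node or provider:

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder endpoint

# the entire history and current state of the machine is public:
# anyone can read the latest block without any account or permission
block = w3.eth.get_block("latest")
print(block.number, block.timestamp, len(block.transactions))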
Also worth mentioning are zero-knowledge proofs which can enable privacy, things like anonymizing transactions and messaging. Of course these will for sure be used to nefarious ends, but they also open up possibilities for fighting tyranny and free exchange of information. Regardless of my opinion or anyone else’s, the cat’s out of the bag and these will be technologies that societies will need to contend with.
Why should I care?
I didn’t care until recently, a month ago maybe. When I decided to take a peek to see what was going on in the web3 space, I found a whole new world. There are so many engineers out there who have realized the potential in this area, not to mention many of the smartest investors and technologists. The excitement is palpable and the amount of energy in the community is invigorating. I joined the Developer DAO, a new community of people who simply want to work on cool stuff together and help others learn how to program with this new technology. Purely focused on teaching and sharing knowledge. People from all over the world just magically appear and help each other build projects, not asking for anything in return. If you want to learn more about the web3 world you could do a lot worse than following @Developer_DAO on twitter.
As with all paradigm shifts, some older engineers will scoff and dismiss the new hotness as a stupid fad. There were those who pooh-poohed personal computers which could never match the power and specialized hardware of mainframes, those who mocked graphical interfaces as being for the weak, a grumpy engineer my mother knew who said the internet is “just a fad”, and people like Oracle’s CEO Larry Ellison saying the cloud is just someone else’s computer. Or me, saying the iPhone looks like a stupid idea.
The early phase of web3 is cryptocurrencies and blockchains (“layer 1”) solutions. Not something that non-technical people or really anyone can take full advantage of because there are few interfaces to interact with it. In the phase we’re in right now developer tools and additional layers of abstraction (“layer 2”) are starting to become standardized and accessible, and it’s just now starting to become possible to build web3 applications with user interfaces. Very soon we’ll start to see new types of applications appearing, to enable new kinds of communities, organizations, identity, and lots more nobody has dreamed up yet. There will be innumerable scams, a crash like after the first web bubble, annoying memesters and cryptochads. My advice is to ignore the sideshows and distractions and focus on the technology, tooling, and communities that weren’t possible until now and see what creative and world-changing things people build with web3.
The defi revolution is in full swing if you know where to look. Serious efforts to build out and improve the underlying infrastructure for smart contracts, as well as applications, art, and financial systems, are popping up almost every week it seems. They use their own native tokens to power their networks, games, communities, transactions, NFTs and things that haven’t been thought up yet. As more decentralized autonomous organizations (DAOs) track their assets, voting rights, and ownership stakes on-chain, the market capitalization of tokens will soon be measured in trillions of dollars.
Avalanche is one new token of many that is an example of how new tokens can garner substantial support and funding if the community deems the project worthy.
There are as many potential uses for crypto tokens as there are for fiat money, except tokens in a sense “belong” to these projects and shared endeavours. If enough hype is built up, masses of people may speculate to the tune of hundreds of billions of dollars that the value of the tokens will increase. While many may consider their token purchases to be long-term investments in reputable projects with real utility, sometimes coming with rights or dividend payments, I believe a vast majority of people are looking to strike it rich quick. And some certainly have. The idea that you can get in early on the right coin and buy at a low price, and then sell it to someone not as savvy later on for way more money is a tempting one. Who doesn’t want to make money without doing any real work? I sure do.
Quickstart
If you want to skip all of the explanations and look at code you can run, you can download the JupyterLab notebook that contains all of the code for creating and optimizing a strategy.
Now for some background.
Trading and Volatility
These tokens trade on hundreds of exchanges around the world, from the publicly held and highly regulated Coinbase to fly-by-night shops registered in places like the Seychelles and the Cayman Islands. Traders buy and sell the tokens themselves, as well as futures and leveraged tokens, to bet on price movement up and down, lend tokens to other speculators making leveraged bets, and sometimes actively coordinate pump and dump campaigns on disreputable discords. Prices swing wildly for everything from the most established and institutionally supported Bitcoin to my own MishCoin. This volatility is an opportunity to make money.
With enough patience anyone can try to grab some of these many billions of dollars flowing through the system by buying low and selling higher. You can do it on the timeframe of seconds or years, depending on your style. While many of the more mainstream coins have a definite upwards trend, all of them vacillate in price on some time scale. If you want to try your hand at this game what you need to do is define your strategy: decide what price movement conditions should trigger a buy or a sell.
Since it’s impossible to predict exactly how any coin will move in price in the future this is of course based on luck. It’s gambling. But you do have full control over your strategy and some people do quite well for themselves, making gobs of money betting on some of the stupidest things you can imagine. Some may spend months researching companies behind a new platform, digging into the qualifications of everyone on the team, the problems they’re trying to solve, the expected return, the competitive landscape, technical pitfalls and the track record of the founders. Others invest their life savings into an altcoin based on chatting to a pilled memelord at a party.
Automating Trading
Anyone can open an account at an exchange and start clicking Buy and Sell. If you have the time to watch the market carefully and look for opportunities this can potentially make some money, but it can demand a great deal of attention. And you have to sleep sometime. Of course we can write a program to perform this simple task for us, as long as we define the necessary parameters.
I decided to build myself a crypto trading bot using Python and share what I learned. It was not so much a project for making real money (right now I’m up about $4 if I consider my time worth nothing) as a learning experience to teach myself more about automated trading and scientific Python libraries and tools. Let’s get into it.
To create a bot to trade crypto for yourself you need to do the following steps:
Get an API key for a crypto exchange you want to trade on
Define, in code, the trading strategy you wish to use and its parameters
Test your strategy on historical data to see if it would have hypothetically made money had your bot been actively trading during that time (called “backtesting”)
Set your bot loose with some real money to trade
Let’s look at how to implement these steps.
Interfacing With an Exchange
To connect your bot to an exchange to read crypto prices, both historical and real-time, you will need an API key for the exchange you’ve selected.
Fortunately you don’t need to use a specialized library for your exchange because there is a terrific project called CCXT (CryptoCurrency eXchange Trading Library) which provides an abstraction layer to most exchanges (111 at the time of this writing) in multiple programming languages.
This means our bot can use a standard interface to buy and sell and to fetch price ticker data (called “OHLCV” in the jargon – open/high/low/close/volume data) in an exchange-agnostic way.
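For example, fetching recent hourly candles through CCXT might look like this; the exchange and trading pair are arbitrary choices, and the credential strings are placeholders:

import ccxt

# instantiate an exchange; API credentials go here when you want to trade
exchange = ccxt.binance({"apiKey": "...", "secret": "..."})

# fetch OHLCV candles: each entry is [timestamp_ms, open, high, low, close, volume]
candles = exchange.fetch_ohlcv("BTC/USDT", timeframe="1h", limit=24)
for ts, o, h, l, c, v in candles:
    print(ts, o, h, l, c, v)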
Now, the even better news is that we don’t really even have to use CCXT directly; we can use a further abstraction layer to perform most of the grunt work of trading for us. There are a few such trading frameworks out there; I chose to build my bot using one called PyJuque, but feel free to try others and let me know if you like them. What this framework does for you is provide the nuts and bolts of keeping track of open orders and buying and selling when certain triggers are met. It also provides backtesting and test-mode features so you can test out your strategy without using real money. You still need to connect to your exchange though in order to fetch the OHLCV data.
To configure the bot you need to decide on a few parameters (a small worked example follows this list):
How much money to start with (in terms of the quote currency, so if you’re trading BTC/USD then this value will be in USD)
What fraction of the starting balance to commit in each trade
How far below the current price to place a buy order when a “buy” signal is triggered by your strategy
How much you want the price to go up before selling (aka “take profit” aka “when to sell”)
When to sell your position if the price drops (“stop loss”)
What strategy to use to determine when buy signals get triggered
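To make the arithmetic concrete, here is a tiny sketch of how these parameters translate into actual order prices; all the numbers are hypothetical:

# hypothetical parameter values
signal_distance = 1.0   # place the buy order 1% below the current price
take_profit = 3.0       # sell 3% above the price our buy order filled at
stop_loss_value = 10.0  # bail out 10% below our buy fill price

current_price = 40_000.0  # e.g. BTC/USD when a buy signal fires

buy_price = current_price * (1 - signal_distance / 100)  # 39,600
sell_price = buy_price * (1 + take_profit / 100)         # 40,788
stop_price = buy_price * (1 - stop_loss_value / 100)     # 35,640
print(buy_price, sell_price, stop_price)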
Selecting a Strategy
Here we also have good news for the lazy programmers such as myself: there is a venerable library called ta-lib that contains implementations of 200 different technical analysis routines. It’s a C library so you will need to install it (macOS: brew install ta-lib). There is a python wrapper called pandas-ta.
All you have to do is pick a strategy that you wish to use and input parameters for it. For my simple strategy I used the classic “bollinger bands” in conjunction with a relative strength index (RSI). You can pick and choose your strategies or implement your own as you see fit, but ta-lib gives us a very easy starting point. A future project could be to automate trying all 200 strategies available in ta-lib to see which work best.
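As a sketch of what this looks like with pandas-ta, building on the candles fetched earlier (the column names follow pandas-ta’s default naming convention, and the thresholds here are arbitrary):

import pandas as pd
import pandas_ta as ta

# build a DataFrame from OHLCV candles fetched from the exchange
df = pd.DataFrame(candles, columns=["time", "open", "high", "low", "close", "volume"])

# bollinger bands (20 periods, 2 standard deviations) and 14-period RSI
bbands = df.ta.bbands(length=20, std=2.0)
rsi = df.ta.rsi(length=14)

# a naive combined buy signal: price below the lower band while RSI is oversold
buy_signal = (df["close"] < bbands["BBL_20_2.0"]) & (rsi < 30)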
Tuning Strategy Parameters
The final step before letting your bot loose is to configure the bot and strategy parameters. For the bollinger bands/RSI strategy we need to provide at least the slow and fast moving average windows. For the general bot parameters noted above we need to decide on the buy signal distance, stop loss price, and take profit percentage. What numbers do you plug in? Which values work best for the coin you want to trade?
Again we can make our computer do all the work of figuring this out for us with the aid of an optimizer. An optimizer lets us find the optimum inputs for a given fitness function, testing different inputs in multiple dimensions in an intelligent fashion. For this we can use scikit-optimize.
To use the optimizer we need to provide two things:
The domain of the inputs, which will be reasonable ranges of values for the aforementioned parameters.
A function which returns a “loss” value between 0 and 1. The lower the value the more optimal the solution.
import math

from skopt.space import Real, Integer
from skopt.utils import use_named_args

# here we define the input ranges for our strategy
fast_ma_len = Integer(name='fast_ma_len', low=1, high=12)
slow_ma_len = Integer(name='slow_ma_len', low=12, high=40)

# number between 0 and 100 - 1% means that when we get a buy signal,
# we place buy order 1% below current price. if 0, we place a market
# order immediately upon receiving signal
signal_distance = Real(name='signal_distance', low=0.0, high=1.5)

# take profit value between 0 and infinity, 3% means we place our sell
# orders 3% above the prices that our buy orders filled at
take_profit = Real(name='take_profit', low=0.01, high=0.9)

# if our value dips by this much then sell so we don't lose everything
stop_loss_value = Real(name='stop_loss_value', low=0.01, high=4.0)

dimensions = [fast_ma_len, slow_ma_len, signal_distance, take_profit, stop_loss_value]


def calc_strat_loss(backtest_res) -> float:
    """Given backtest results, calculate loss.

    Loss is a measure of how badly we're doing.
    """
    score = 0

    for symbol, symbol_res in backtest_res.items():
        symbol_bt_res = symbol_res['results']
        profit_realised = symbol_bt_res['profit_realised']
        profit_after_fees = symbol_bt_res['profit_after_fees']
        winrate = symbol_bt_res['winrate']

        if profit_after_fees <= 0:
            # failed to make any money.
            # bad.
            return 1

        # how well we're doing (positive)
        # money made * how many of our trades made money
        score += profit_after_fees * winrate

    if score <= 0:
        # not doing so good
        return 1

    # return loss; lower number is better
    return math.pow(0.99, score)  # map score into (0, 1]; higher score -> lower loss


@use_named_args(dimensions=dimensions)
def objective(**params):
    """This is our fitness function.

    It takes a set of parameters and returns the "loss" - an objective single
    scalar to minimize.
    """
    # take optimizer input and construct bot with config - see notebook
    bot_config = params_to_bot_config(params)
    backtest_res = backtest(bot_config)
    return calc_strat_loss(backtest_res)
Once you have your inputs and objective function you can run the optimizer in a number of ways. The more iterations it runs for, the better an answer you will get. Unfortunately in my limited experiments each iteration took longer than the last to decide which inputs to test next. This is likely inherent to the default Gaussian process approach: the optimizer refits its surrogate model over every previously evaluated point before suggesting a new one, so suggestions grow more expensive as observations accumulate.
Asking for new points to test gets slower as time goes on.
The package contains various strategies for selecting points to test, depending on how expensive your objective function is to evaluate. If the optimizer is doing a good job exploring the input space you should hopefully see loss trending downwards over time. This represents more profitable strategies being found as time goes on.
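One way to kick off a run is with gp_minimize, which drives the whole loop using a Gaussian process surrogate (the n_calls and random_state values here are arbitrary):

from skopt import gp_minimize

result = gp_minimize(
    func=objective,
    dimensions=dimensions,
    n_calls=200,      # number of evaluations of the objective
    random_state=1,   # for reproducibility
)
print("lowest loss:", result.fun)
print("best parameters:", result.x)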
After you’ve run the optimizer for some time you can visualize the search space. A very useful visualization is to take a pair of parameters to see in two dimensions the best values, looking for ranges of values which are worth exploring more or obviously devoid of profitable inputs. You can use this information to adjust the ranges on the input domains.
The green/yellow islands represent local maxima and the red dot is the global maximum. The blue/purple islands are local minima.
You can also visualize all combinations of pairs of inputs and their resulting loss at different points:
Note that the integer inputs slow_ma_len and fast_ma_len have distinct steps in their inputs vs. the more “messy” real number inputs.
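scikit-optimize ships with plotting helpers in skopt.plots that can generate these kinds of views from a finished result:

from skopt.plots import plot_objective, plot_evaluations
import matplotlib.pyplot as plt

# estimated loss surface across pairs of input dimensions
plot_objective(result)
plt.show()

# where the optimizer actually sampled the input space
plot_evaluations(result)
plt.show()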
After running the optimizer for a few hundred or thousand iterations it spits out the best inputs. You can then visualize the buying and selling the bot performed during backtesting. This is a good time to sanity-check the strategy and see if it appears to be buying low and selling high.
Run the Bot
Armed with the parameters the optimizer gave us we can now run our bot. You can see a full script example here. Set SIMULATION = False to begin trading real coinz.
Trades placed by the bot.
All of the code to run a functioning bot and a JupyterLab Notebook to perform backtesting and optimization can be found in my GitHub repo.
I want to emphasize that this system does not constitute any meaningfully intelligent way to automatically trade crypto. It’s basically my attempt at a trader “hello world” type of application: a good first step but nothing more than the absolute bare minimum. There is vast room for improvement, with things like creating one model for volatility data and another for price spikes, trying to overcome overfitting, hyperparameter optimization, and lots more. Also be aware you will need a service such as CoinTracker to keep track of your trades so you can report them on your taxes.
Since we have (mostly) advanced beyond CGI scripts and PHP, the default tool many people reach for when building a web application is a framework. Like drafting a standard legal contract or making a successful Hollywood film, it’s good to have a template to work off of. A framework lends structure to your application and saves you from having to reinvent a bunch of wheels. It’s a solid foundation to build on, whether a substantial “batteries included” model (Rails, Django, Spring Boot, Nest) or a lightweight “slap together whatever shit you need outta this” sort of deal (Flask, Express).
Foundations can be handy.
The idea of a web framework is that there are certain basic features that most web apps need and that these services should be provided as part of the library. Nearly all web frameworks will give you some custom implementation of some or all of:
Configuration
Logging
Exception trapping
Parsing HTTP requests
Routing requests to functions
Serialization
Gateway adaptor (WSGI, Rack, WAR)
Middleware architecture
Plugin architecture
Development server
There are many other possible features but these are extremely common. Just about every framework has its own custom code to route a parsed HTTP request to a handler function, as in “call hello() when a GET request comes in for /hello.”
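In Flask, for instance, that mapping is a one-decorator affair:

from flask import Flask

app = Flask(__name__)

# the framework routes GET /hello to this function for us
@app.route("/hello")
def hello():
    return "Hello!"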
There are many great things to say about this approach. The ability to run your application on any sort of host from DigitalOcean to Heroku to EC2 is something we take for granted, as well as being able to easily run a web server on your local environment for testing. There is always some learning curve as you learn the ins and outs of how you register a URL route in this framework or log a debug message in that framework or add a custom serializer field.
But maybe we shouldn’t assume that our web apps always need to be built with a framework. Instead of being the default tool we grab without a moment’s reflection, now is a good time to reevaluate our assumptions.
Serverless
What struck me is that a number of the functions that frameworks provide are not needed if I go all-in on AWS. Long ago I decided I’m fine with Bezos owning my soul and acceded to writing software for this particular vendor, much as many engineers have built successful applications locked in to various layers of software abstraction. Early programmers had to decide which ISA or OS they wanted to couple their application to, later we’re still forced to make non-portable decisions but at a higher layer of abstraction. My python or JavaScript code will run on any CPU architecture or UNIX OS, but features from my cloud provider may restrict me to that cloud. Which I am totally fine with.
I’ve long been a fan of and written about serverless applications on this blog because I enjoy abstracting out as much of my infrastructure as possible so as to focus on the logic of my application that I’m interested in. My time is best spent concerning myself with business logic and not wrangling containers or deployments or load balancer configurations or gunicorn.
I’ve had a bit of a journey over the years adopting the serverless mindset, but one thing has been holding me back and it’s my attachment to web frameworks. While it’s quite common and appropriate to write serverless functions as small self-contained scripts in AWS Lambda, building a larger application in this fashion feels like trying to build a house without a foundation. I’ve done considerable experimentation mostly with trying to cram Flask into Lambda, where you still have all the comforts of your familiar framework and it handles all the routing inside a single function. You also have the flexibility to easily take your application out of AWS and run it elsewhere.
There are a number of issues with the approach of putting a web framework into a Lambda function. For one, it’s cheating. For another, when your application grows large enough the cold start time becomes a real problem. Web frameworks have the side-effect of loading your entire application code on startup, so any time a request comes in and there isn’t a warm handler to process it, the client must wait for your entire app to be imported before handling the request. This means users occasionally experience an extra few seconds of delay on a request, not good from a performance standpoint. There are simple workarounds like provisioned concurrency but it is a clear sign there is a flaw in the architecture.
Classic web frameworks are not appropriate for building a truly serverless application. It’s the wrong tool for the architecture.
The Anti-Framework
Assuming you are fully bought in to AWS and have embraced the lock-in lifestyle, life is great. AWS acts like a framework of its own providing all of the facilities one needs for a web application but in the form of web services of the Amazonian variety. If we’re talking about RESTful web services, it’s possible to put together an extremely scalable, maintainable, and highly available application.
No docker, kubernetes, or load balancers to worry about. You can even skip the VPC if you use the Aurora Data API to run SQL queries.
This list could go on for a very long time but you get the point. If we want to be as lazy as possible and leverage cloud services as much as possible then what we really want is a tool for composing these services in an expressive and familiar fashion. Amazon’s new Cloud Development Kit (CDK) is just the tool for that. If you’ve never heard of CDK you can read a friendly introduction here or check out the official docs.
In short CDK lets you write high-level code in Python, TypeScript, Java or .NET, and compile it to a CloudFormation template that describes your infrastructure. A brief TypeScript example from cursed-webring:
// API Gateway with CORS enabled
const api = new RestApi(this, "cursed-api", {
  restApiName: "Cursed Service",
  defaultCorsPreflightOptions: {
    allowOrigins: apigateway.Cors.ALL_ORIGINS,
  },
  deployOptions: { tracingEnabled: true },
});

// defines the /sites/ resource in our API
const sitesResource = api.root.addResource("sites");

// get all sites handler, GET /sites/
const getAllSitesHandler = new NodejsFunction(
  this,
  "GetCursedSitesHandler",
  {
    entry: "resources/cursedSites.ts",
    handler: "getAllHandler",
    tracing: Tracing.ACTIVE,
  }
);
sitesResource.addMethod("GET", new LambdaIntegration(getAllSitesHandler));
Is CDK a framework? It depends how you define “framework,” but I consider it to be more infrastructure as code. By allowing you to effortlessly wire up the services you want in your application, CDK removes the need for any sort of traditional web framework when it comes to features like routing or responding to HTTP requests.
While CDK provides a great way to glue AWS services together it has little to say when it comes to your application code itself. I believe we can sink even lower into the proverbial couch by decorating our application code with metadata that generates the CDK resources our application declares, specifically Lambda functions and API Gateway routes. I call it an anti-framework.
@JetKit/CDK
To put this into action we’ve created an anti-framework called @jetkit/cdk, a TypeScript library that lets you decorate functions and classes as if you were using a traditional web framework, with AWS resources automatically generated from application code.
The concept is straightforward. You write functions as usual, then add metadata with AWS-specific integration details such as Lambda configuration or API routes:
import { HttpMethod } from "@aws-cdk/aws-apigatewayv2"
import { Lambda, ApiEvent } from "@jetkit/cdk"

// a simple standalone function with a route attached
export async function aliveHandler(event: ApiEvent) {
  return "i'm alive"
}

// define route and lambda properties
Lambda({
  path: "/alive",
  methods: [HttpMethod.GET],
  memorySize: 128,
})(aliveHandler)
If you want a Lambda function to be responsible for related functionality you can build a function with multiple routes and handlers using a class-based view. Here is an example:
import { HttpMethod } from "@aws-cdk/aws-apigatewayv2"
import { badRequest, methodNotAllowed } from "@jdpnielsen/http-error"
import { ApiView, SubRoute, ApiEvent, ApiResponse, ApiViewBase, apiViewHandler } from "@jetkit/cdk"

@ApiView({
  path: "/album",
  memorySize: 512,
  environment: {
    LOG_LEVEL: "DEBUG",
  },
  bundling: { minify: true, metafile: true, sourceMap: true },
})
export class AlbumApi extends ApiViewBase {
  // define POST handler
  post = async () => "Created new album"

  // custom endpoint in the view
  // routes to the ApiViewBase function
  @SubRoute({
    path: "/{albumId}/like", // will be /album/123/like
    methods: [HttpMethod.POST, HttpMethod.DELETE],
  })
  async like(event: ApiEvent): ApiResponse {
    const albumId = event.pathParameters?.albumId
    if (!albumId) throw badRequest("albumId is required in path")

    const method = event.requestContext.http.method

    // POST - mark album as liked
    if (method == HttpMethod.POST) return `Liked album ${albumId}`
    // DELETE - unmark album as liked
    else if (method == HttpMethod.DELETE) return `Unliked album ${albumId}`
    // should never be reached
    else return methodNotAllowed()
  }
}

export const handler = apiViewHandler(__filename, AlbumApi)
The decorators aren’t magical; they simply save your configuration as metadata on the class, just as the Lambda() function above does. This metadata is later read when the corresponding CDK constructs are generated for you. ApiViewBase contains some basic functionality for dispatching to the appropriate method inside the class based on the incoming HTTP request.
Isn’t this “routing?” Sort of. The AlbumApi class is a single Lambda function for the purposes of organizing your code and keeping the number of resources in your CloudFormation stack at a more reasonable size. It does however create multiple API Gateway routes, so API Gateway is still handling the primary HTTP parsing and routing. If you are a purist you can of course create a single Lambda function per route with the Lambda() wrapper if you desire. The goal here is simplicity.
The reason Lambda() is not a decorator is that function decorators do not currently exist in TypeScript due to complications arising from function hoisting.
Why TypeScript?
As an aside, TypeScript is now my preferred choice for backend development. JavaScript no, but TypeScript yes. The rapid evolution and improvements in the language with Microsoft behind it have been impressive. The language is as strict as you want it to be. Having one set of tooling, CI/CD pipelines, docs, libraries and language experience in your team is much easier than supporting two. All the frontends we work with are React and TypeScript, why not use the same linters, type checking, commit hooks, package repository, formatting configuration, and build tools instead of maintaining say, one set for a Python backend and another for a TypeScript frontend?
Python is totally fine except for its lack of type safety. Do not even attempt to blog at me ✋🏻 about mypy or pylance. It is like saying a Taco Bell is basically a real taqueria. Might get you through the day but it’s not really the same thing 🌮
Construct Generation
So we’ve seen the decorated application code; how does it get turned into cloud resources? With the ResourceGeneratorConstruct, a CDK construct that takes your functions and classes as input and generates AWS resources as output.
import { CorsHttpMethod, HttpApi } from "@aws-cdk/aws-apigatewayv2"
import { Construct, Duration, Stack, StackProps, App } from "@aws-cdk/core"
import { ResourceGeneratorConstruct } from "@jetkit/cdk"
import { aliveHandler, AlbumApi } from "../backend/src" // your app code

export class InfraStack extends Stack {
  constructor(scope: App, id: string, props?: StackProps) {
    super(scope, id, props)

    // create API Gateway
    const httpApi = new HttpApi(this, "Api", {
      corsPreflight: {
        allowHeaders: ["Authorization"],
        allowMethods: [CorsHttpMethod.ANY],
        allowOrigins: ["*"],
        maxAge: Duration.days(10),
      },
    })

    // transmute your app code into infrastructure
    new ResourceGeneratorConstruct(this, "Generator", {
      resources: [AlbumApi, aliveHandler], // supply your API views and functions here
      httpApi,
    })
  }
}
It is necessary to explicitly pass the functions and classes you want resources for to the generator because otherwise esbuild will optimize them out of existence.
If you want to try it out as fast as humanly possible you can clone the TypeScript project template to get a modern serverless monorepo using NPM v7 workspaces.
If you want to build a cloud-native web service, consider reaching for the AWS Cloud Development Kit. CDK is a new generation of infrastructure-as-code (IaC) tools designed to make packaging your code and infrastructure together as seamless and powerful as possible. It’s great for any application running on AWS, and it’s especially well-suited to serverless applications.
The CDK consists of a set of libraries containing resource definitions and higher-level constructs, and a command line interface (CLI) that synthesizes CloudFormation from your resource definitions and manages deployments. You can imperatively define your cloud resources like Lambda functions, S3 buckets, APIs, DNS records, alerts, DynamoDB tables, and everything else in AWS using TypeScript, Python, .NET, or Java. You can then connect these resources together and into more abstract groupings of resources and finally into stacks. Typically one entire service would be one stack.
CDK doesn’t exactly replace CloudFormation because it generates CloudFormation markup from your resource and stack definitions. But it does mean that if you use CDK you don’t really ever have to manually write CloudFormation ever again. CloudFormation is a declarative language, which makes it challenging and cumbersome to do simple things like conditionals, for example changing a parameter value or not including a resource when your app is being deployed to production. When using a typed language you get the benefit of writing IaC with type checking and code completion, and the ability to connect resources together with a very natural syntax. One of the real time-saving benefits of CDK is that you can group logical collections of resources into reusable classes, defining higher level constructs like CloudWatch canary scripts, NodeJS functions, S3-based websites with CloudFront, and your own custom constructs of whatever you find yourself using repeatedly.
The CLI for CDK gives you a set of tools mostly useful for deploying your application. A simple cdk deploy parses your stacks and resources, synthesizes CloudFormation, and deploys it to AWS. The CLI is basic and relatively new, so don’t expect a ton of mature features just yet. I am still using the Serverless framework for serious applications because it has a wealth of built-in functionality and useful plugins for things like testing applications locally and tailing CloudWatch logs. AWS’s Serverless Application Model (SAM) is sort of equivalent to Serverless, but feels very Amazon-y and more like a proof-of-concept than a tool with any user empathy. The names of all of these tools are somewhat uninspired and can understandably cause confusion, so don’t feel bad if you feel a little lost.
Sample CDK Application
I built a small web service to put the CDK through its paces. My application has a React frontend that fetches a list of really shitty websites from a Lambda function and saves them in the browser’s IndexedDB, a sort of in-browser object database. The user can view the different shitty websites with previous and next buttons and submit a suggestion of a terrible site to add to the webring. You can view the entire source here and the finished product at cursed.lol.
The Cursed Webring
To kick off a CDK project, run the init command: cdk init app --language typescript.
This generates an application scaffold we can fill in, beginning with the bin/cdk.ts script if using TypeScript. Here you can optionally configure environments and import your stacks.
#!/usr/bin/env node
import "source-map-support/register";
import * as cdk from "@aws-cdk/core";
import { CursedStack } from "../lib/stack";

const envProd: cdk.Environment = {
  account: "1234567890",
  region: "eu-west-1",
};

const app = new cdk.App();
new CursedStack(app, "CursedStack", { env: envProd });
The environment config isn’t required; by default your application can be deployed into any region and AWS account, making it easy to share and create development environments. However if you want to pre-define some environments for dev/staging/prod you can do that explicitly here. The documentation suggests using environment variables to select the desired AWS account and region at deploy-time and then writing a small shell script to set those variables when deploying. This is a very flexible and customizable way to manage your deployments, but it lacks the simplicity of Serverless which has a simple command-line option to select which stage you want. CDK is great for customizing to your specific needs, but doesn’t quite have that out-of-the-box user friendliness.
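That pattern looks roughly like this, using the CDK_DEFAULT_* variables the CLI populates at synth-time (the stack name here is illustrative):

// bin/cdk.ts - resolve account/region from the environment at deploy-time
const envFromShell: cdk.Environment = {
  account: process.env.CDK_DEFAULT_ACCOUNT,
  region: process.env.CDK_DEFAULT_REGION,
};

new CursedStack(app, "CursedStack-dev", { env: envFromShell });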
DynamoDB
Let’s take a look at a construct that defines a DynamoDB table for storing user submissions:
import * as core from "@aws-cdk/core";
import * as dynamodb from "@aws-cdk/aws-dynamodb";

export class CursedDB extends core.Construct {
  submissionsTable: dynamodb.Table;

  constructor(scope: core.Construct, id: string) {
    super(scope, id);

    this.submissionsTable = new dynamodb.Table(this, "SubmissionsTable", {
      partitionKey: {
        name: "id",
        type: dynamodb.AttributeType.STRING,
      },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });
  }
}
Here we create a table with a string id primary key. In this example we save the table as a public property (this.submissionsTable) on the instance of our Construct, because we will want to reference it later to grant write access to our Lambda function and to pass the function the table name so it can write to the table. This concept of using a class property to keep track of resources you want to pass to other constructs isn’t anything particular to CDK – it’s just something I decided to do on my own to make it easy to connect different pieces of my service together.
Lambda Functions
Here I declare a construct which defines two Lambda functions. One function fetches a list of websites for the user to browse, and the other handles posted submissions, which are saved into our DynamoDB submissionsTable as well as Slacked to me. I am extremely lazy and manage most of my applications this way. We use the convenient NodejsFunction high-level construct to make our lives easier. This is the most complex construct of our stack. It:
Loads a secret containing our Slack webhook URL
Defines a custom property submissionsTable that it expects to receive
Defines an API Gateway with CORS enabled
Creates an API resource (/sites/) to hold our function endpoints
Defines two Lambda NodeJS functions (note that our source files are TypeScript – compilation happens automatically)
Connects the Lambda functions to the API resource as GET and POST endpoints
Grants write access to the submissionsTable to the submitSiteHandler function
import * as core from "@aws-cdk/core";
import * as apigateway from "@aws-cdk/aws-apigateway";
import * as sm from "@aws-cdk/aws-secretsmanager";
import { NodejsFunction } from "@aws-cdk/aws-lambda-nodejs";
import { LambdaIntegration, RestApi } from "@aws-cdk/aws-apigateway";
import { Table } from "@aws-cdk/aws-dynamodb";

// ARN of a secret containing the slack webhook URL
const slackWebhookSecret =
  "arn:aws:secretsmanager:eu-west-1:178183757879:secret:cursed/slack_webhook_url-MwQ0dY";

// required properties to instantiate our construct
// here we pass in a reference to our DynamoDB table
interface CursedSitesServiceProps {
  submissionsTable: Table;
}

export class CursedSitesService extends core.Construct {
  constructor(
    scope: core.Construct,
    id: string,
    props: CursedSitesServiceProps
  ) {
    super(scope, id);

    // load our webhook secret at deploy-time
    const secret = sm.Secret.fromSecretCompleteArn(
      this,
      "SlackWebhookSecret",
      slackWebhookSecret
    );

    // our API Gateway with CORS enabled
    const api = new RestApi(this, "cursed-api", {
      restApiName: "Cursed Service",
      defaultCorsPreflightOptions: {
        allowOrigins: apigateway.Cors.ALL_ORIGINS,
      },
    });

    // defines the /sites/ resource in our API
    const sitesResource = api.root.addResource("sites");

    // get all sites handler, GET /sites/
    const getAllSitesHandler = new NodejsFunction(
      this,
      "GetCursedSitesHandler",
      {
        entry: "resources/cursedSites.ts",
        handler: "getAllHandler",
      }
    );
    sitesResource.addMethod("GET", new LambdaIntegration(getAllSitesHandler));

    // submit, POST /sites/
    const submitSiteHandler = new NodejsFunction(
      this,
      "SubmitCursedSiteHandler",
      {
        entry: "resources/cursedSites.ts",
        handler: "submitHandler",
        environment: {
          // let our function access the webhook and dynamoDB table
          SLACK_WEBHOOK_URL: secret.secretValue.toString(),
          CURSED_SITE_SUBMISSIONS_TABLE_NAME: props.submissionsTable.tableName,
        },
      }
    );

    // allow submit function to write to our dynamoDB table
    props.submissionsTable.grantWriteData(submitSiteHandler);
    sitesResource.addMethod("POST", new LambdaIntegration(submitSiteHandler));
  }
}
While there’s a lot going on here it is very readable if taken line-by-line. I think this showcases some of the real expressiveness of CDK. That props.submissionsTable.grantWriteData(submitSiteHandler) stanza is really 👨🏻‍🍳👌🏻. It grants that one function permission to write to the DynamoDB table that we defined in our first construct. We didn’t have to write any IAM policy statements, reference CloudFormation resources, or even look up exactly which actions this statement needs to consist of. This gives you a bit of the flavor of CDK’s simplicity compared to writing CloudFormation by hand.
If you’d like to look at the source code of these Lambdas you can find it here. Fetching the list of sites is accomplished by loading a Google Sheet as a CSV (did I mention I’m really lazy?) and the submission handler does a simple DynamoDB Put call and hits the Slack webhook with the submission. I love this kind of web service setup because once it’s deployed it runs forever and I never have to worry about managing it again, and it costs roughly $0 per month. If a website is submitted I can evaluate it and decide if it’s shitty enough to be included, and if so I can just add it to the Google Sheet. And I have a record of all submissions in case I forget or one gets lost in Slack or something.
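The submit handler boils down to something like the following. To be clear, this is a sketch of the shape of it rather than the actual repo code, with node-fetch assumed for the webhook call:

// resources/cursedSites.ts (sketch, not the real repo code)
import { DynamoDB } from "aws-sdk";
import fetch from "node-fetch";

const db = new DynamoDB.DocumentClient();

export async function submitHandler(event: any) {
  const { url } = JSON.parse(event.body ?? "{}");

  // record the submission so nothing gets lost
  await db
    .put({
      TableName: process.env.CURSED_SITE_SUBMISSIONS_TABLE_NAME!,
      Item: { id: Date.now().toString(), url },
    })
    .promise();

  // ping the Slack webhook so I see it right away
  await fetch(process.env.SLACK_WEBHOOK_URL!, {
    method: "POST",
    body: JSON.stringify({ text: `New cursed site submitted: ${url}` }),
  });

  return { statusCode: 200, body: "ok" };
}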
CloudFront CDN
Let’s take a look at one last construct I put together for this application: a CloudFront CDN distribution in front of an S3 static website bucket. I realized the need to mirror many of these lame websites because, due to their inherent crappiness, they were slow, didn’t support HTTPS (needed when iframing), and might not stay up forever. A little wget --mirror magic fixed that right up.
It’s important to preserve these treasures
Typically defining a CloudFront distribution with HTTPS support is a bit of a headache. Again the high-level constructs you get included with CDK really shine here and I made use of the CloudFrontWebDistribution construct to define just what I needed:
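A minimal sketch of such a construct might look like the following; the domain names and certificate ARN are placeholders rather than my real values:

import * as core from "@aws-cdk/core";
import * as acm from "@aws-cdk/aws-certificatemanager";
import * as cloudfront from "@aws-cdk/aws-cloudfront";

// placeholders - substitute your own values
const certArn = "arn:aws:acm:us-east-1:1234567890:certificate/placeholder";
const bucketWebsiteDomain = "cursed-mirror.s3-website-eu-west-1.amazonaws.com";

export class CursedMirror extends core.Construct {
  constructor(scope: core.Construct, id: string) {
    super(scope, id);

    // reference an existing ACM certificate for the custom domain
    const cert = acm.Certificate.fromCertificateArn(this, "MirrorCert", certArn);

    new cloudfront.CloudFrontWebDistribution(this, "CursedMirrorCDN", {
      originConfigs: [
        {
          // point at the existing S3 static website bucket
          customOriginSource: {
            domainName: bucketWebsiteDomain,
            originProtocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
          },
          behaviors: [{ isDefaultBehavior: true }],
        },
      ],
      // serve over HTTPS with a custom domain alias
      viewerCertificate: cloudfront.ViewerCertificate.fromAcmCertificate(cert, {
        aliases: ["mirror.cursed.lol"],
      }),
    });
  }
}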
This creates an HTTPS-enabled CDN in front of my existing S3 bucket with static website hosting. I could have created the bucket with CDK as well but, since there can only be one bucket with this particular domain, that seemed a bit overkill. If I wanted to make this more reusable these values could be stack parameters.
The Stack
Finally the top-level Stack contains all of our constructs. Here you can see how we pass the DynamoDB table provided by the CursedDB construct to the CursedSitesService containing our Lambdas.
import * as cdk from "@aws-cdk/core";
import { CursedMirror } from "./cursedMirror";
import { CursedSitesService } from "./cursedSitesService";
import { CursedDB } from "./db";

export class CursedStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const db = new CursedDB(this, "CursedDB");
    new CursedSitesService(this, "CursedSiteServices", {
      submissionsTable: db.submissionsTable,
    });
    new CursedMirror(this, "CursedSiteMirrorCDN");
  }
}
Putting it all together, all that’s left to do is run cdk deploy to summon our cloud resources into existence and write our frontend.
Security Warnings
It’s great that CDK asks for confirmation before opening up ports.
Is This Better?
Going through this exercise of creating a real service using nothing but CDK was a great way for me to get more comfortable with the tools and concepts behind it. Once I wrapped my head around the way the constructs fit together and started discovering all of the high-level constructs already provided by the libraries, I really started to dig it. Need to load some secrets? Need to define Lambda functions integrated with API Gateway? Need a CloudFront S3 bucket website distribution? Need CloudWatch canaries? It’s already there and ready to go, along with strict compile-time checking of your syntax and properties. I pretty much never encountered a situation where my code compiled but the deployment was invalid, which is a vastly improved state of affairs compared to writing CloudFormation manually.
And what about Terraform? In my humble opinion, if you’re going to build cloud-native software it’s a waste of effort to abstract away your cloud provider and their resources. Better to embrace the tooling and particulars of one provider and specialize, instead of pursuing some idealistic cloud-agnostic setup at a great cost in efficiency. Multi-cloud is the worst practice.
The one thing that I missed most from the Serverless framework was tailing my CloudWatch logs. When I had issues in my Lambda logic (not something the CDK can fix for you) I had to go into the CloudWatch console to look at the logs instead of simply being able to tail them from the command line. The upshot though is that CDK is simply code, and writing your own tooling around it using the AWS API should be straightforward enough. I expect SAM and the CDK CLI to only get more mature and user-friendly over time, so I imagine I’ll be building projects of increasing seriousness with them as time progresses.
If you want to learn more, start with the CDK docs. And if you know of any cursed websites please feel free to mash that submit button.
It was in the last great recession, about 2008-2010, that I started doing contract software development. The bubble didn’t burst with as much force and shrapnel as in 2000, but it distinctly dampened the animal spirits of SOMA, San Francisco, where all the startups lived.
It was somewhat by design. The previous job I had, writing Java and ActionScript for a marketing research company, was chill enough but felt aimless. I had so little motivation I ended up coasting for a few months and playing a lot of pingpong. I wanted some new challenges, so I decided to go freelance.
The first gig I got was off of Craigslist, although I imagine these days Upwork would be a better place to look for work. The project was very limited in scope; mostly adding a Google Maps visualization on top of some grant data for a nonprofit. It was a couple weeks of work, a couple grand, and time to move on.
Some time later a former colleague of mine hired me for some development work at a SF startup doing IP telephony combined with podcasting. This was 2009 so nobody had ever heard of podcasts and smartphones were still a fresh new technology people weren’t entirely sure what to do with. The gig was a fun challenge – enabling people to listen to and produce podcasts using only a telephone, no apps involved. I ended up developing some pretty innovative software that resembled a web framework but for touch tone and interactive voice response (IVR) phone applications. We successfully moved the application from an expensive managed solution to our own in-house platform that I designed, significantly cutting down operating costs.
After some time working on that project, I met a couple of guys who wanted to build a SEO-optimized directory of medical professionals, starting with Spanish-speaking plastic surgeons. I said I could cobble together a search engine in my spare time and was engaged. Some days I would walk the two blocks over from the small telephony company office to the small office housing the nascent doctor directory business and show my progress.
It seemed clear at the time that the podcasting telephony company, while highly experimental, was professionally run. It had not one but two Stanford business school co-CEOs running it, with what amounted to a successful track record in the form of an early dot-com electronic greeting card company. Remember those? Weird shit. There were respectable investors, a small team of smart and highly competent professionals, and a beautiful office on Howard St. I recall that on the floor below our office sat a little room containing no furniture save a well-stocked bar, never any people in sight, and a sign on the door reading “GitHub.” I felt like great things could happen.
In contrast, my side gig seemed like small-time SEO hustling, with a smaller team and paycheck and no Stanford business vibes or major VC funding. I didn’t see anything wrong with that, and still tried to do a professional job, but it seemed like more of a dead end compared to the “real” engineering I was doing, fighting battles with touchy open-source PBX software and voice recognition grammars.
Then things turned out completely differently from what I expected. The doctor directory project kept growing, expanding, and taking on a life of its own. We got a proper office at 1st and Mission and hired an engineer, a designer, a salesperson. Plastic surgeons were mostly ditched, and now we were primarily helping American dentists establish a presence on this new “world wide web” technology they couldn’t quite wrap their heads around. This contracting gig that I imagined would consist of a couple months of basic work kept growing, and there was always more work to do. Without any planning or expectations it turned into a real company, eventually with a staff of 25 talented and terrific people and an extremely respectable office in the Financial District. We built software to help all kinds of small medical practices in the US manage their patient communication, from appointment reminders to e-visits to actually useful online medical Q&A. Six years later we sold the company to a large practice management software firm in Irvine, CA.
And practiced our swordsmanship.
Following that experience my business partner John and I started a new company together again, this time on purpose. We started hiring and training some of the best young engineers, taking on projects and filling outstaffing needs for our clients, staffing a couple offices in Eastern Europe until covid forced us to go purely remote. This has in effect scaled up from my original single-person consulting operation into a powerhouse team of crack young engineers ready to take on complex software projects.
The classic Silicon Valley VC-backed, Stanford-connected, hip startup went nowhere, and closed its doors. I got permission to open-source the IVR framework we built but little else came of it. As it happened, consulting across different clients helped me to gain a broader picture of what was actually possible and break my preconceived notions founded on image instead of substance. I accidentally ended up starting a company which went on to help make American health care just a little tiny bit less terrible, created a couple dozen jobs, and had the profound and unique experience of building up the software for a company starting completely from nothing up through due diligence and acquisition. Along the way I learned some lessons about software contracting I want to share with others who may be considering going rōnin and setting out on their own as freelancers.
Tradeoffs and Considerations
It’s my nature to think of everything in terms of trade-offs. Maybe because I have engineer-brain, or because I’m a libra, who knows. There are real benefits to consulting as opposed to being a full-time employee, but also some downsides.
Legal Concerns
In America at least, contracting means forming your own business, doing 1099 tax forms and racking up deductions, and drafting and reviewing contracts. It’s more effort and responsibility than being a full-time employee somewhere, as you’re now responsible for taxes and legal matters. Even if you’re not in America, being able to work across borders means creating a legal business entity.
I started out by getting a DBA, or “Doing Business As” name, under which I could legally create contracts and other paperwork using an official-looking business name. Later on I “upgraded” to a California S-Corporation, which gives favorable tax treatment once your income reaches a certain threshold, along with some legal liability protection: if your corporation is sued, usually all that can be collected is what the corporation has, shielding you personally to some degree. A Cali S-Corp will run you $800 a year, not counting the time spent on paperwork and taxes you or your accountant/tax attorney will be doing.
Even getting a DBA or Fictitious Business Name is far from simple. In Contra Costa County for example:
Within 30 days after a fictitious business name statement has been filed, the registrant shall cause it to be published in a newspaper of general circulation in the county where the fictitious business name statement was filed or, if there is no such newspaper in that county, in a newspaper of general circulation in an adjoining county. If the registrant does not have a place of business in this state, the notice shall be published in a newspaper of general circulation in Sacramento County. The publication must be once a week for four successive weeks and an affidavit of publication must be filed with the county clerk where the fictitious business name statement was filed within 30 days after the completion of the publication.
You also need to take out an ad in a local paper announcing the new business and inform your county.
You don’t need to do this yourself necessarily, services like BusinessRocket can take care of business formation and taxes for a small fee.
One of the first things you’ll need to do is ask your friends for legal services recommendations. If you know anyone who is a contractor or a small business owner they probably have lawyers they work with and can recommend. Or you can hit me up. I enlisted the services of a business attorney to help me draft and review contracts, and a tax attorney to take care of the corporation taxes and paperwork. Obviously lawyers are not cheap and in theory you can do all of this yourself, but they can save you a great deal of time and money by warning you about common pitfalls and fuckups, and by suggesting ways to better protect yourself or take advantage of favorable tax laws.
When you agree to do work for a client they will want to know your hourly or project rate and will need a contract to sign. Understand that around a third to a half of your hourly rate is going to go to taxes, so adjust accordingly: if you want to take home $100/hour, you may need to bill more like $150-200. A contract will need to have some important pieces of information. I highly suggest not listening to me and listening to an Actual Lawyer in your country or state about drafting a contract, but as far as software development goes you will typically need to include a “schedule of work.” This is the scope of what you will be expected to deliver in order to get paid.
Come at the schedule of work with a PM mindset – you have to figure out what the client actually wants and what actually needs to get built. If you end up needing to do work outside of this scope it can be a headache, and having a contract which spells out what you’ve been asked to do makes an effective backstop against scope creep. As a related matter, I highly suggest per-hour billing rather than fixed-price projects whenever possible, because as we all know the best-laid statements o’ work often go awry.
Technology
Sometimes you have a particular skill or framework or vertical or some other such specialization, and you can look for relevant gigs. When I started out long ago mine was Perl, and I didn’t have a hard time at all finding work. The first nonprofit project I worked on was already using my favorite Perl web framework. Some jobs will leave it entirely up to you to create something from nothing, and you can have your choice of technology for solving the problem. Sometimes you will go work for a company with their own existing codebase that wants to expand it, fix existing problems, or throw it away and rewrite from scratch.
For me this is one of the most thrilling parts of contracting: getting to see how different companies operate. I’ve gotten an opportunity to look at a good number of different codebases and operational setups. It gives you a broader view of the landscape, allowing you to borrow best practices others have landed on, and learn from the mistakes of others. I’ve gotten to encounter a lot of technologies I would not have otherwise run into, like seeing different implementations of microservices, getting really familiar with SNMP, and deconstructing a J2EE application. When you work for one company or for yourself for a long time, it can be hard to stay current or get experience with other technologies. When working with different companies, you can rapidly take in and observe various stacks that organizations have coalesced around, usually ending up with good practices. There’s an infinite combination of frameworks, languages, architectures, libraries, development environments, and security practices and the state of the art is always in flux. Having exposure to new assemblies of technology keeps you curious and informed and better able to make decisions for your clients and projects.
Most of our business consists of either building new projects for people, taking over existing projects, or joining existing teams. We get to experience not just code of course, but see how different organizations are run, different business models, all kinds of personalities and team dynamics. It can potentially open you up to a richer tapestry of experiences and cultures than working on the same product and team for years and years will.
The technology you encounter will vary wildly, from hipster web microframeworks to ancient enterprise Java. Being flexible and able to rapidly adapt and figure out the basics of a lot of different technologies is a very valuable skill, as is learning how to start up every kind of bespoke development environment out there. I sort of have a weird perverted dream of going around rewriting ancient COBOL applications for desperate businesses to run on modern serverless cloud-first architecture.
You never know where freelancing will take you.
Clients
The coolest thing about being a contractor is that you can be your own boss. You set your own schedule, work from where you want, and don’t technically have to wear pants a lot of the time. Maybe this is less of a big deal than it used to be thanks to the ‘rona but flexibility is definitely something I value a lot.
Of course in the end everyone has a boss; you can’t escape it. The CEO has to answer to the board and investors, the investors have to answer to their partners, the partners have to answer to funds, and so on. Your boss is now the client, since they’re the one cutting your checks.
In my experience this has been a good thing. I can report honestly that I’ve enjoyed working with 100% of my past and present clients and things have on the whole gone very smoothly. Much of it comes down to choosing your clients. You will turn some people down because they are looking for someone with different skills, don’t pay enough, have a Million Dollar App Idea I Just Need Someone To Build It, are unprofessional, or just not cut out for the whole business thing. Just don’t work for these people. It’s okay to turn down work. Always act professionally though, no matter what situation you find yourself in. Your reputation absolutely follows you around, and taking pride in your work and professionalism is a requirement for being a freelancer.
Another skill that you must be consciously aware of and always seeking to improve is communication. Being direct, open, and transparent with your clients can often mean the difference between a successful project and one that ends up in a mess of assumptions and bad feelings. Underpromise and overdeliver is the mantra. Over-communicate, bring any concerns about the project to the fore, have regularly scheduled progress meetings when applicable, and do demos for your client. Focus on delivering something visible, something the client can look at and play with, so you can get feedback early. There will almost always be some gray area between what your client has in their mind and what you envision in your own, not just in designs but in all the finer details.
Sometimes a client may come to you with detailed designs and specifications, but I’ve never been in a situation where all the information needed to deliver a project was hashed out up front. Most of the time very little of it is. You need to establish a good two-way street of communication and always be asking for feedback and clarification of ambiguities. Get a MVP in their hands as early as possible and iterate on it.
Other Skills
Consider also what skills you can bring besides just writing code. Familiarity and expertise in UX and design is very valuable and basically a requirement for most software jobs these days. We’ve had clients come to us to help them perform due diligence on codebases of companies that they are considering acquiring. We’ve performed security audits on codebases, sometimes unbidden. And thanks to our extensive commercial experience building successful companies we can also provide valuable consulting services on marketing, sales and raising capital.
Whatever extra you can bring to the table for your clients be sure to market it and build up your experience and knowledge in that area. It can make the difference between being just another coder and a valuable partner to your client.
Working on three laptops at once can be a valuable skill.
Getting Started
For many contractor-curious folks getting started may be scary or daunting.
If you’re a full-time employee, becoming a contractor means giving up some job security. You may not have a guaranteed paycheck for a while, if ever. Being a contractor, especially if you’re starting out and doing it alone, brings many uncertainties. However working as a full-time employee carries its own sort of risks. Job security isn’t what it used to be, spending time on bureaucracy and pleasing your superiors may not make you fulfilled, and there’s a real limit to how much you can accomplish for yourself as part of a larger organization.
There is comfort in working at a company; all you have to do is show up and be told what to do, you don’t have to think much about taxes or contracts, someone else can make a lot of the decisions about how to run the company. But if you want to take on more responsibility, have the opportunity to grow and learn to run your own business, and do things your way then consider becoming a contractor. Almost everyone I know who works for a company of any size will be happy to tell you all about the mistakes and boneheaded ideas of their superiors. Everyone has ideas of how the company they work for could be run better. I say if you really believe this then work for yourself.
So how to begin? You have two options: line up your first gig yourself, or join a company doing freelance work. You can put the word out to your network that you are available for work, or you can look for jobs posted on sites like Upwork. There are also companies that specialize in doing contract work and are often looking for contractors to augment their pool of developers. If you go this route, know that such companies may not always have work immediately lined up for you, but they may be happy to interview you and keep you on file in case some work comes up that matches your qualifications.
My suggestion is to do both: look at what’s out there, get a feel for what people are looking for and how much they’re offering, and also check out companies that are doing the kind of contracting work you’re interested in and offer your services to them. There is certainly no shortage of work out there for contractors; it’s more a matter of finding a good fit that will be engaging and well-compensated. Even if you start with some small simple jobs, they can lead to greater opportunities as you gain more confidence, experience, references, and a better understanding of the market.
I would of course be remiss as a small business owner if I did not mention that our consulting company JetBridge is always looking for smart and talented engineers. If you’re thinking of becoming a contractor, feel free to drop me a line, and I might be able to help you get started or refer you to other work out there. I know it can be an intimidating career jump, but it can be extremely rewarding and full of new opportunities as well. And if the current tech bubble happens to pop again someday, it just might be a great time to try something new.