Farisa’s Courage publication update

I’ve said, a couple of times, that I’d like to have Farisa’s Courage out by October 1, 2017. I feel compelled to backtrack on that: I spoke too soon, and I’m no longer sure. It depends on the first agonizing decision that a writer makes: do I self-publish, or do I work with a traditional publisher?

If I self-publish, I can set my own deadlines and I can easily have a polished book out by October, even with a job change and cross-country move possible this summer. It’s not ready now, but it will be by then. If I use a traditional publisher, it would take a minor miracle to achieve October 2018. Publishing, even if you’re submitting a polished manuscript that doesn’t take a lot of work on their side, doesn’t move fast. I’ve been studying the publishing industry for a few days and poring through contradictory information, but that’s a recurring theme: it’s slow.

I’m still figuring out how publishing works. It’s massively complicated, and it’s terrifying. At every turn, you’re trusting someone else with your reputation. You can self-publish and avoid massive headaches around agents and contracts and various problems of incentive, but then you’re trusting yourself to manage your reputation, as opposed to a professional who knows what she’s actually doing. Given that half of Silicon Valley believes that I’m a unionist, I do not have supreme confidence in my own ability to manage a reputation.

Before I get into publishing, let me talk about the writing process, because it’s interesting. It seems to have a T-shaped time profile. The original ideas, in some cases, I’ve had for ten years. I’ve been designing Farisa’s character for four years. Some of the scenes, I wrote two years ago. That’s the long, upper-left branch of the “T”. However, I didn’t have a novel for quite a while. I had no idea how to start the first book in the series, much less how to end it. Then, about two weeks ago, I figured it out. Everything clicked, so there was a two-week spell with a lot of writing. I had a few “limit breaks” (10000+ word days). Although this writing required (and will continue to require) polish, it didn’t require more than the normal amount. That’s the upright staff of the “T”. Then there’s revision and editing. I’m guessing that it’ll take three to five months of work, about one to two hours per day. Revision and editing have to be done at a sustainable pace. You’re no longer playing a movie in your head and writing down what you see. Instead, you’re trying to critique the narrative as bluntly as you would if a stranger had written it, and then replace any weak parts (there are few of them, but the target is zero) with professional-quality writing. That’s hard to do in a 22-hour “writer 7-to-5” binge.

I’ve already surpassed the level of quality that is “publishable”. (That doesn’t mean that I won’t get rejected. Everyone gets rejection letters. It’s a rite of passage.)  However, my goal isn’t to be “publishable”. I want to go far beyond that, especially since this novel is the first in a series that’ll probably take me a decade or more to finish. It has to be good. That takes time. It takes work. If I succeed with this, I’ll be writing more (and probably better) novels building off of this one. So it’s worth getting right.

The older (and for a long time, only) publication approach is to “trade publish” with a publishing house (“legacy” or “traditional” press). You’ll almost certainly need an agent to play, and agents are virtually impossible to get for first-time writers. Worse, if you get a bad agent (and how would you know?) you might land with the wrong publisher. If you get the wrong publisher, they might do very little promotion. If you sell poorly, it doesn’t matter if it’s your publisher’s fault because he failed to get you reviews, marketing, and exposure. You’re the person it sticks to. No one in the publishing world differentiates between a bad book and a good book that didn’t sell for reasons having nothing to do with the writing. In fact, you’re often judged as a success or failure based on performance in the first eight weeks. If you’re underperforming in the second month, your publisher doesn’t call up Oprah and get you on the show to drive sales. It’s more likely that he forgets that you ever existed.

What’s a good publisher? I don’t believe that there’s a ranking for it, and it certainly depends both on the writer and the work. I would say that a good publisher is one who believes in you and will leverage every resource and personal relationship he has in order to make you succeed. A good publisher will get you reviewed in major magazines before your book comes on the market. If you’re writing topical non-fiction, a good publisher will get you on The Daily Show. A good publisher will tell bookstores that your book must be placed in a prominent location and make “the next call is from my boss to your boss” calls if someone doesn’t comply. If you’re into TV deals, a good publisher will find someone in his network whose teenage son interns at HBO, and have copies land in the way of decision makers. A bad publisher thinks his job ends when he sends you a check for the advance. You might still succeed, but it’s on you to make this happen. (You’re effectively self-publishing, but with higher expectations.) Note that all of this is project-specific. If you get a prestigious publisher who doesn’t believe in your work and who doesn’t do anything to market it, then you got a bad publisher– even if he’d be a great publisher for someone else or a different book. If you choose a smaller publisher who can get your book read by the people who’ll love it enough to start a word-of-mouth phenomenon, that’s a good publisher. The game isn’t about “getting in to” the most prestigious publishing house, but about finding individuals who will do absolutely anything to make your work succeed. You don’t just want to be another line in a spreadsheet. You want someone who will pick up a phone at 7:30pm to get reviews and intangibles and placement in order.

When you self-publish, you’re your own marketer and business operator. You’ll still need cover art. You’ll need copy editing, no matter how good you think you are. That’s expensive. You sell e-books and you set the price. $3.99? $7.99? There’s no easy number. There’s no floor on the number of copies that you sell. You might sell zero copies, whereas if you’d used a reputable publisher, you’d have a floor of a few thousand. Expectations are probably lower, and the money can be better. You don’t have to worry about losing a writing career if you self-publish and fail to “earn out” your advance, because there is no advance. On the other hand, if you don’t sell anything, you don’t make anything– again, because there’s no advance.

What makes the decision difficult is that you have to choose, on very little information, what publication strategy you want to pursue. You can’t try one, then the other. If you self-publish and get 2500 copies out there, you’ve done very well. Yet a trade publisher will think that those 2500 “came out of” what the book is capable of doing. (I don’t buy this, and I can make a strong mathematical argument against it, but I don’t make the rules.) They’ll be nervous about editing a book that has loyal readers. So, if you self-publish, then trade publication becomes nearly impossible– even if your book is very good and sells well. With a multi-book series, this is an even bigger concern. Self-publishing the first book might lock out trade publication in general for the entire series. Current trends suggest that self-publishing will grow in prestige and effectiveness, but to self-publish a series is to bet on this.

Trade publishing also has more prestige. Most published books make very little money, but it’s a great line on a résumé in most careers. Self-publishing doesn’t have that cachet, because anyone can do it. That doesn’t mean that it isn’t worth doing. There are great self-published books out there. In fact, some people make more money self-publishing than they ever would if they used traditional publishers, because the royalties are more favorable. It’s just harder to convince people to read a $4.99 e-book than to buy a block of paper and dye for $17.99. The publisher gets the first couple thousand readers, but the self-publisher has to go out and get every single one.

Oddly enough, trade publishing seems to have the advantage at the very bottom and very top of the spectrum of what’s publishable. The worst books (“0-4”) can’t be traditionally published at all, unless written by celebrities like a certain corpulent exhibitionist who copped a $3-million advance. (I don’t think that she wrote a 4. Judging from her TV show, I think she’s capable of more. I’m saying that she could write a 4 and it would still sell.) The bottom-tier publishable books (“5-6”) benefit greatly from having a trade publisher. The writer gets to learn from a competent editor (and might hit 7+ later on, as his skills improve) and the accolade of being traditionally published has some value. The middle-tier publishable books (“7-8”) might do better with self-publishing. Why? I’ll explain that, below. The top-tier publishable books (“9-10”) could probably be fine either way, but trade publishing wins again. Why? Well, they get different treatment. If you convince a publisher that your book is 9-10, they’ll move hell and earth to make you a best-seller. If you wrote nonfiction, you’ll get invited to talk at TED– the main one, not TEDx. You’ll have the best team working on the title and every editor will be using every personal relationship and resource to make your title succeed. You’ll be sick of having to fly out for TV spots, but those will drive sales. You’ll be reviewed everywhere. You’ll have a six-figure marketing budget behind your title. If the editor has a teenage son who is an intern at HBO, that son will leave 20 copies around the office and you may get a movie deal.

At 5-6, your book is probably not good enough to start a word-of-mouth epidemic (unless it’s trashy or campy or broken in a viral-friendly way like 50 Shades) so trade publishing gets you a couple thousand copies and a physical block of paper to use as a trophy. If you can get the 9-10 treatment, it’s really worth it to put up with the delays and (in that case, slight) reduction in creative control that come with traditional publishing. At 7-8, though? I think that self-publishing probably wins: more control, more points on the package, better timeframe. At 7-8, you don’t need the validation that a 5-6 would seek, because you know that you’re good; on the other hand, you’re not likely to be getting that special “9-10” treatment that makes it worthwhile to hand over control on pricing, distribution, cover art, title, and various things that publishers do but that will affect your sales and reputation. A first-time writer who hits 8 is probably going to get the same mediocre treatment as one who hits a 5. Yet a self-published 5 isn’t going to move much, and a self-published 8 might do well enough to receive interest from traditional publishers (but, sadly, not on that title or series) if the results match the quality of writing.

Why does traditional publishing maintain superior prestige? Well, it’s not a good look to be seen as shooting for 8. People who’d still be considered above-average writers shoot for 10 and hit 5. People who are quite talented shoot for 10 and hit 7. Everyone’s supposed to be shooting for 10. (I don’t mean to imply that self-published writers aren’t shooting for 10. For many, it’s about retaining creative control, which can be a different expression of shooting for 10.) Besides, an 8 is still very good. I would put less than 5% of traditionally published books at the 8 mark and less than 0.5% at 9. Even still, you’re selling from behind if you communicate, “I think I might just be an 8.” Now, I know that that’s not necessarily what self-publishing means. It might mean, “I’d like to get this out more quickly, instead of waiting for three years to see it in a reader’s hands.” It might mean, “I think this has 9 or 10 potential, but I can’t find an agent or publisher who buys in to my vision, so the 9-10 treatment isn’t on offer.” It might mean, “I value creative control more than what a publisher provides.” It might mean, “I think this is a solid 10, but it keeps getting rejected”, and it might actually be a 10– because we all know good authors who’ve had that experience. Unfortunately, perceptions are based on simple models. As for why self-publishing is less prestigious, the answer is simple. Someone who wrote a 10, and who knew that she wrote a 10, and who knew that everyone would receive it as a 10, would take the traditional route.

Of course, in the real world, these numbers don’t exist and the whole model betrays false assumptions (writers know exactly how good or bad they are, quality of literature can be linearly ordered, sales correlate in a meaningful way to literary quality, the best writers never get rejected– none of them true). Even still, I think that self-publishing will be behind in prestige until we reach a point where it makes sense even for the 9-10. Undiscovered 9-10’s will still self-publish, but publishers will always treat verified 9-10’s very differently from other authors and, in most cases, make it worth their while.

I have to size up, very quickly, if I have a shot (probably, a very small shot) at the “9-10” treatment. If I do, it might be worth it to take the traditional route and find an agent. If that doesn’t seem to be the case, I’d rather self-publish and, if my work is good, let that speak for itself.

What about money? The honest answer is: I don’t personally care, because there’s not much. I am making a career change, because I’m disgusted with corporate software, but I’m not planning to move into professional writing. (I’m remaining technical, but going back into research.) It is very rare to make substantial money with either traditional or self-published fiction. People earning $50,000 from their first novels (an effort that takes more than a year) are outliers. It is also possible to write a “9-10” quality book that doesn’t sell well or that makes no money. Anyway, you tend to get more points on the package (70% vs. 15%) when you self-publish, but if you get the “9-10”/star treatment from a traditional publisher, you might sell so many more copies that it’s worth it to have the smaller percentage.

So, where am I? How do I see it? It’d be vain to guess where I score on that 0-10 spectrum. My 95-percent confidence interval for where my manuscript is, right now, is still 2.5 points wide. I still have a few months of polishing to do. I think (although I may be off the mark) that I have it in me to bring the writing into the 9-10 range with a little bit (meaning a couple hundred hours, so “little bit” is relative) of work. Do I have what it takes to get 9-10 treatment from a publisher? I have no idea. I can’t even begin to assess that. Even finding an agent who would know is going to take several months.

I will probably try traditional publishing out (it can’t hurt to query agents) and see what it offers me. The truth is, though, that unless I can get the “9-10” star treatment– and let’s be clear, even if what I wrote turns out to be great, the odds are low– I probably won’t use a traditional publisher. I don’t need an advance, and I have no interest in the “standard” trade-publisher package that comes with minimal promotional support. I don’t need to be “published”. I know that I can write. At one time, I had one of the top tech-industry blogs. (It was not an enviable experience, and I stopped for a reason.) If a publisher is not going to give me the “let’s swing for a homer” package, then I’m confident that I can self-publish and be better off. Yet, if I can find a publisher that’ll move hell and earth to help me (a) make the novel as great as it can be, and (b) sell the damn thing like there’s no tomorrow, then I’ll work with them. I’d be insane not to do so.

So, the first strategy is to try for traditional publishing. Maybe I’ll find an agent who thinks I wrote a “9-10”. Maybe she’ll find a publisher who feels the same way. I’m willing to forgo an advance– if you take zero, you automatically “earn out”– but the level of promotional support that I’d expect from a publisher typically comes with a $250,000 advance, which first-timers almost never get. (My view is that I’d rather have the advance money put into promotion and cover art. Let’s all make this thing as great as it can be, and I’ll take only points.) These are impossible odds for an average writer and they’re long odds even for me.

So, I have to be honest about what this means here. Even if what I’ve written is great, I have about a 1% chance of getting an offer that I’ll accept. That’s not a 1% chance of being able to get published. With enough persistence, I think I’m closer to 95 percent for this title, and 100 percent overall. (Of course, everyone gets rejected. A lot. You just keep playing.) That’s a 1% chance of getting published under terms good enough for me to hand over creative control. Meanwhile, it’s probably going to take years to find out if that’s even in the offing. I might strike out at getting an agent (which, these days, is infinitely harder than getting published once you have an agent). I might get 47 rejections. I might get an offer that just doesn’t work. I am, after all, planning to ask for a non-traditional contract (“forget the advance, but throw an extra $250k into promotion, because my reputation is on the line”). I’m not that likely to get it. I have to try, and I have to put months of work behind that try, but I’m a realist.

So, I can’t promise a timeframe. That’s annoying. If I could self-publish and later traditionally publish, after having a track record, that’s absolutely what I would do. I would approach publishers if and only if I could ask for the “9-10” treatment (in which case, trade publishers really do add value). Unfortunately, everything that I’m coming across suggests that you can’t do that. Once you self-publish a book, trade publishers are out. You pick one or the other, before you have a single reader.

The conclusion of this long exploration is that I can’t say for sure when I’ll have this title in a reader’s hand. Not knowing anything about the game, I have to figure things out. I have to try things. Meanwhile, I can’t lose sight of the most important thing, which is making this book the best damn thing I can make it. On that note… back to it.

 

Farisa’s Courage (novel) is Revision Complete

“Revision Complete” means that I won’t be adding characters, changing scenes, or altering storyline in any major way. “Edit Complete” (i.e. fine-tooth comb, zero typo tolerance) will be a couple months from now. I’d like to have something finished and ready for the world by Oct. 1, 2017. We’ll see what I can do.

I’m sending out a finite (fixed but undisclosed) number of copies, even before I shop this out to publishers. It’s an intermediate draft (obviously). So, to people who’ve read my writing: if you’re interested, please let me know.

Here’s the summary / blurb / trailer for Farisa’s Courage, intended as the first in a series (“The Antipodes”).

The Antipodes

The planet is hot. Civilization thrives close to the poles, but the tropics are uninhabitable. Sea temperatures exceed 50°C (122°F) and violent storms make entering (much less crossing) the tropics impossible. Deserts broil, and the jungles are filled with strange creatures such as skrums, squibbani, ghouls, and dragons. The two hemispheres have been out of contact for thousands of years. There are rumors of a high-altitude path between the two worlds, the Mountain Road. Unfortunately, that path is as dangerous as it is uncertain. Some rumors hold that it doesn’t exist. Others say that it’s protected by a mysterious, ancient magic that sends the unaware to their deaths. There are darker theories involving espionage, ethnic persecution, and the deeply corrupt political and economic environment of the time. Even if the Mountain Road is real, no one knows whether there’s something more dangerous on the other side. The known path passes through cursed cities, dangerous caves, and deserts reaching 80°C (176°F) where the only sustenance is a poisonous cactus. It’s largely believed that what is in the southern hemisphere is even more terrible.

State of the World

Humans have won. Dragons, orcs, and elves still exist, but the human world stands at a population of at least a billion. Technological marvels like steamships and machine guns dominate the world. Trains achieve a blistering pace of twenty to thirty miles per hour. Plank turnpikes supporting carriages connect the cities. However, all is not well. The industrial economy is in decline. Age-old ethnic hatreds are broiling. Cryptic graffiti on city walls suggests danger. Economic inequality and climate change are roiling continents. The lynchpin of the modern world is an organization now known as the Global Company: originally a detective agency called Alcazar Detectives, specializing in witch hunts, strike breaking, and bounty hunting.

The Global Company

The Global Company is in the business of… everything, from alcohol to fossil fuels to railroads to murder. It topples nations, it funds pogroms, and it chooses losers and winners everywhere it can, in order to win at all costs. The Global Company controls seventy percent of the known (northern hemisphere) world economy, and it’s running out of world to conquer. One man in that firm, Hampus Bell, owns more than 45 percent of it, or a third of the world’s wealth. Even as the chief executive, he isn’t safe from internal intrigue, bureaucratic incompetence, and the mysterious syr Konklava that lives within his firm. The Global Company has been studying magic, with limited success, for decades. Yet something is changing. Magic is starting to work there. No one knows why. Meanwhile, the Company’s mercurial chief executive seems to be increasingly unstable and dangerous. A corporate presentation ends in a grisly murder. The price of Global Company stock (the only stock that matters) fluctuates wildly. Mysterious suicides by high-ranking executives mount.

The Blue Marquessa

Magic is very real. Few deny that it exists, but its practice is discouraged. People with magical talents, or “mages”, suffer from a terrible disease known as “The Sickness” or “The Blue Marquessa”. It causes fatigue, infertility, amnesia, insanity, and death. Every spell has a cost, and numerous dangers come with the practice. According to the Global Company, the vast majority of mages either quit or die within six months. Most dangerous are those who continue, but gradually go insane. This is a world where everything has consequences, and knowledge and virtue (sophya wy fariza, an ancient inscription) are mandatory for a mage’s survival.

Not all mages can enter minds and control others, but the most powerful can. If two mages enter one mind, disaster can ensue. Entering the mind of another mage is dangerous. Entering that of an undead means certain death. And being in the mind of a person when that person dies can have unspeakable consequences that continue beyond death.

Farisa La’ewind

Farisa is a smart, good-looking 20-year-old girl “from everywhere and nowhere”, a brown-skinned girl in a snow-white land, a bookish erudite in a world of conflict and anger, an orphan in a mostly friendless and cold world, and a known person in a society where invisibility is the greatest asset. Protected by an ancient, despised ethnicity and a burgeoning resistance movement, she’s an orphan who knows little of her past. Her mother was killed by the Global Company. Her father is believed to be on the Mountain Road. Her three living sisters, like her, live in hiding and rely on espionage and magical assistance to survive.

Farisa’s also one of the most powerful mages in the known world. After a freak accident and, one year later, being accused of a murder that she could not possibly have committed, she has become a symbol in the Global Company’s historic business of witch and bounty hunting. Hampus Bell lacks an interest in her, but can’t prevent his subordinates from chasing the golden trophy among all witches. Presumed missing or dead for a long time, and able to reinvent herself under a different identity, she was once able to attend college and be “a normal girl”. Yet, very recently, all of that went horribly wrong in a way that she must understand, in order to survive… but cannot remember.

26 April ’94

It’s two in the morning. Barefoot and wearing ill-fitting clothing, Farisa is running through woods, then rural by-roads, and then the drunkard- and john-filled, declining industrial city of Exmore. She’s been running for at least ten miles, maybe more. Her memory is rapidly deteriorating, presumably due to an attack of the Blue Marquessa. If she finds a safe spot, she’ll get better. But where? She knows that she needs to reach “House 139”, where her questions might be answered and her journey can begin. As she passes through the dilapidated outskirts of the strange city, she comes to a frightening conclusion based on the offensive, cryptic graffiti. The people of the town already know that she’s there– they knew before she arrived. Mages? Spies? Something dark in her past that drew her there? It isn’t clear. To add to her disadvantage, Farisa has no memory of the four years leading up to this point, 2:00 am on April 26. If she wants to survive, she’ll have to recover these memories– and figure out why she lost them.

The two most powerful people in the world– a talented mage with a big heart and a terrible illness, versus a trillionaire who dislikes profanity, harbors a perverted secret, and loves the Global Company more than anything in the world– are drawn into a conflict that neither of them really wants to fight. Hampus must find Farisa in order to prevent unrest in his own company. Farisa must avoid Hampus to survive. Possible fates range into those worse than death, as the Global Company’s engineered pogroms increase and its prison camps proliferate. The stakes get higher and higher with every mile.

Farisa meets Claes Bergryn, a gun-toting steam-era knight in a leather jacket. She meets Mazie, a beautiful resistance fighter with a million secrets. She meets Vikus and Wegen, the key to deciphering a dangerous city’s haunting graffiti. She meets Andor Strong, a college professor considered by the Global Company to be one of its most dangerous adversaries. She meets spies (whom I won’t name, spoiler-duh). She meets orcs and dragons and machine gunners and, perhaps the biggest danger of all, other mages. She finds love and loses it. Her skills develop. She learns, over time, not only who she is but who she was… because her survival depends, though she may not know it, on her ability to remember what happened on April 25– the night before.

There’s also a mysterious trash novel called Jakhob’s Gun, rumored to contain a coded message that might save the world. Unfortunately, no one– not even the brilliant Farisa– can decipher it…

End of blog

“When we reach the top,” Farisa said, “it’ll be worth it.”

Raqel was out of breath. Both girls had been trudging through snow since noon and it was only getting deeper as they climbed.

“You’ve never been up there?”

“Not in winter, Farisa. I’m a city girl.”

The peak was barely a hundred feet above, but each way up looked as treacherous as any other: snow and rock, mostly snow.

“Follow me,” Farisa said. “I know the path.”

“My hands are freezing.”

Reaching the summit exposed them to a fierce northerly wind. In its brief spells of rest, Raqel could hear farm dogs, baying in the shaded valley.

“It’s so cold! The lakes are frozen, the roads covered, the trees all bare–”

“Ay,” Farisa said, “and winter means you see farther. Farther than anyone. Farther than you would have ever known.”

The Disruption Algorithm

I’ve observed that business processes tend to follow three phases.

  • innovation: an opportunity is found or a problem is solved.
  • refinement: improvements are made. The process becomes cheaper, more reliable, and generally better.
  • externalization: there are few identifiable improvements to be made. Value capture can possibly be increased, and opportunities to externalize costs are exploited.

In the refinement stage, there are still plenty of opportunities to reduce costs and improve value capture (or yield) without anyone getting hurt. A process that took six months, when performed the first time, might be reduced to two. Genuine waste is getting cut. It’s not zero-sum. However, returns diminish in the refinement stage as the process improves, and the available gains are made.

At this point, talented and creative people look for new opportunities to innovate. Average or risk-averse people look for other processes to refine, and that’s fine: we need both kinds. Vicious and malignant people, however, start making false refinements that externalize costs and risks. Pollution increases, companies become more brittle, and ethics decline. The world becomes sicker, because in the cops-and-robbers game that exists between cost externalizers and regulators, there are just far more of the former. That’s how one gets corporate processes like employee stack ranking and, in software, the bizarre cult that has grown up around two-week “sprints” (as an excuse that allows management to demand rapid but low-quality work). These appear to yield short-term gains, but externalize costs within the company, diminishing the quality of the work.

Most of the sleazy activities for which business corporations are (justifiably) despised are performed in the externalization phase, when the company starts running out of viable refinements and declines to innovate. It may not be (and probably is not) the case that true improvements cease to be possible, at this point. What seems to happen is that there is a shift in political power between those who seek genuine improvements and those who mercilessly externalize costs. Once this happens, the latter quickly and often intentionally drive out the former. They are often able to do so because genuine improvements to processes usually take time to prove themselves, while cost externalizations can be devised that show profits immediately.

It is, at this point, no longer novel to point out that venture-funded startups have ceased to back genuine technical innovation and have, instead, become a continuation of the process (starting in the 1980s) by which the staid corporations and bilateral loyalty of yesteryear have been replaced by quick gambits, degraded working conditions, and a lack of concern by these new companies for their effects on society. This is what “disruption” often looks like.

The realization that I’ve come to is that Silicon Valley, by which I mean the aggressive and immediate financialization of what purports to be technological innovation, is now deep into the third phase. It still “changes the world”, but rarely in a desirable way, and most often by externalizing costs into society in ways that regulators haven’t yet figured out how to handle.

While Silicon Valley is marketed, especially to “the talent”, as an opportunity for people to break free of old rules and take on established interests, it’s actually better thought of as a massive and opaque, but precisely tuned, genetic algorithm. The population is the space of business processes, up to the scale of whole companies. The mutation element occurs organically, due to inexperience and deflected responsibility– when rules are broken or scandals occur, a 23-year-old “founder” has plausible deniability through ignorance that a 45-year-old financier or executive wouldn’t have. The crossover component is far more destructive: corporate mergers and acquisitions. In this way, new business processes are generated in a way that is deliberately stochastic and, when seemingly cheaper processes are discovered, they’re typically snapped into existing businesses, with losses of jobs and position to occur later.

In the three-phase analysis above, Silicon Valley had its innovation phase in the 1970s and ’80s, when new ways of funding businesses, and looser attitudes toward risk, began to take hold. Business failure in good faith was destigmatized. No doubt, that was a good thing. The refinement phase began in the late 1980s, brought with it the golden age of the technology industry in the 1990s, and ended around 2005. Silicon Valley is now in the externalization phase, exemplified most strongly by Y Combinator, an incubator that openly champions the monetization of reputation, self-indulgent navel-gazing, disregard for experience (a polite way of saying “ageism”), and a loose attitude toward executive ethics. The genetic algorithm of Silicon Valley still operates but, in the Y Combinator era, it no longer produces more efficient business processes but, rather, those that are cheaper and sloppier.

The innovation and refinement phases of Silicon Valley are long gone. Cost externalization– the replacement of trusted corporations with good benefits by a “gig economy” culture of itinerancy, the pushy testing of regulations by founding billion-dollar companies that flout them, and the regression into a 21st-century Gilded Age– has replaced the old Silicon Valley and driven it out. This makes it the responsibility of the next generation’s innovators to come up with something entirely new. It is clear that Paul Graham and Y Combinator, as well as those who’ve subscribed to their culture of flash over substance, will have no place in it.

Card counters

In the world of casino gambling, card counters are legendary. Most “systems” promising to make it possible to beat casinos are ludicrous. However, blackjack, if played very well, is winnable by the player. For every successful counter, there are probably hundreds who fail at it, because one mistake per hour can annihilate even an optimal player’s edge. Casinos love the legend of the card counter, because it encourages so many people to do it ineptly, and because there’s a lot of money to be made in selling books on the subject. They don’t love actual card counters, though. Those get “burned out”. Casinos share lists of known counters, so it’s pretty typical that a too-skillful player will be banned from all casinos at approximately the same time, and therefore lose this source of income.

There’s danger involved, as is documented in Ben Mezrich’s Bringing Down the House. In Vegas, they’ll just ban you for counting. Shadier outfits in other jurisdictions will do a lot worse. It’s important to note that card counters aren’t, in any sense of the word, cheating. They’re skilled players who’ve mastered the rules of the game and disciplined their minds well enough to keep track of what is happening; nothing less, and nothing more. Even still, they face physical intimidation.

Lousy players are money-makers, and good players are seen as costs. How damaging are good players to a casino’s economic interests? Not very, I would imagine. Card counting is legitimately hard. Don’t believe me? Try it, in a noisy environment where cards are dealt and discarded rapidly. Of course, most people know they will lose money, because gambling is a form of entertainment for them. Casinos will always make money, but it’s not enough to have 98 percent of the players be lousy ones. It’s better to select lousy players exclusively and toss the skilled ones out. “You’re too good for us. Don’t come back.”

In other words, the possibility of earning an edge through skillful play is used as a lure. Most people will never acquire such skill, and casinos can hardly be faulted for that.

Play too well, however, and you won’t have a spot at the table. Lousy players only. Sure, you can say that you beat the system. It might make for interesting discussion at a party, but your playing days are over. You’ve won, now go away.

SetBang 3: Fast(er) math in SetBang

SetBang, even when limited to the finite world, is inherently “inefficient”. We might like to live in a world where straightforward set operations can be performed quickly. Unfortunately, that’s not the case. Short programs with no obvious infinite loops can still take a long time.

Let’s restrict ourselves to hereditarily definite sets. A definite set in SetBang is one that is derived from other finite sets, and hereditarily definite means that it’s a definite set whose elements are all (hereditarily) definite sets.

There are many finite and probably-finite sets that aren’t definite. For example, if we admit this macro:

:macro magic #003<[003<[~#1?(_\_\_2>'2>,__2>_12>0)]_2>(4<~5<2>/4>'4>_,4<'4>_)]_;~22>?(__0,_2003<[003<[~#1?(_\_\_2>'2>,__2>_12>0)]_2>(4<~5<2>/4>'4>_,4<'4>_)]_;[~~03>[\3<~3<[\_2>{'"}2>]_4<[~3<~4<2>4<.3>&{'"}]_3>2>]__3<~4>~3<~4<2>4<=(;;,_2>~3<~4<2>4<-3>2>-(~{}3<~4>{~3>2>~4>?(_",__0)}\2>[\3<&2>]_;(~"4<2>-3>2>-1[~3<~4<2>4<.3>&{'"}]_[~3<~4<2>4<.3>&{'"}]_,___0),_)(_1))(_~3<~4<2>4<(03>~{}3<~{}3<-#'[\_#3<~3<~4>[\_2>{'"}2>]_~5<~6>~3<~4<2>4<=(;;,_2>~3<~4<2>4<-3>2>-(~{}3<~4>{~3>2>~4>?(_",__0)}\2>[\3<&2>]_;(~"4<2>-3>2>-1[~3<~4<2>4<.3>&{'"}]_[~3<~4<2>4<.3>&{'"}]_,___0),_)(_1))(_3<~4>6<2>/5>4<2>~3<~4<2>4<-3>2>-(~{}3<~4>{~3>2>~4>?(_",__0)}\2>[\3<&2>]_;(~"4<2>-3>2>-1[~3<~4<2>4<.3>&{'"}]_[~3<~4<2>4<.3>&{'"}]_,___0),_)3<,__3>)]_;,2>)(__1[~3<~4<2>4<.3>&{'"}]_,____00),___10)]_)

we can create these sets:

:macro mersenne ${^\_#~:magic:(_",__0)}

:macro fermat ${^^'~:magic:(_",__0)}

:macro maybebot :fermat:`5_`1; 

What does the magic macro do? Its behavior is:

... N -> ... (1 if N is prime, 0 if N is composite).

As a result, mersenne produces the set of Mersenne primes, which is possibly infinite although that remains unknown, and fermat produces that of Fermat primes, of which there are probably exactly 5 (although there may be infinitely many). The set maybebot, derived from fermat, is nonterminating empty (“bottom”) if there are only 5 Fermat primes, and a one-element set if there are 6 or more. The last of these is a set that is trivially, provably, finite; but its derivation (ultimately from $) makes it a risk of non-termination. It’s finite, but not definite.

If we remove $ from SetBang, we can make every set not only definite but hereditarily definite (meaning that the elements are definite sets as well). We still have the potential for non-terminating computations due to [] loops, but otherwise, even if we use {}, everything will terminate.

Is SetBang still “inefficient”? Yes. So let’s talk about what an efficient SetBang would look like, and assume that we’re only working with hereditarily definite sets (i.e. there is no $). An efficient SetBang would:

  • canonicalize sets so that set equality checks (=) can be performed in bounded time as a function of the size of the set.
  • perform membership checks (?) in bounded time.
  • represent structured large sets (e.g. 8^#^, the set of all subsets of {0, …, 255}) in an efficient way that doesn’t require storing the entire set.
    • The example above has 2^256 elements. We can’t store it in its entirety but, because it’s a highly structured large set, we can easily check for membership in it.
  • perform primitive operations (namely, \ and /) in constant or logarithmic time.
  • “execute” {} comprehensions in a way (strict or lazy) that doesn’t take too much time or space upfront, but that allows the resulting sets to be used efficiently, assuming that the code inside the comprehensions runs efficiently.

Is it possible, on some physical device possibly unlike an existing computer, to make an efficient SetBang, even excluding non-definite sets by removing $? It’s extremely unlikely, as we’ll see below.

Before we do that, we note that ordinals are an extremely inefficient way to represent numbers, so let’s use binary encoding to represent them as sets of ordinals instead. (An implementation, of course, can handle small finite ordinals as efficiently as if they were constants, urelements, or near equivalently, machine integers.) So we represent the number 37 as {0, 2, 5} (where the numerals on the right-hand side are still ordinals) as opposed to {0, …, 36}.
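To make that encoding concrete, here is a small Python sketch of the conversion (the names to_binary and from_binary are mine, for illustration only; they’re not part of any SetBang implementation):

def to_binary(n):
    """Represent a natural number as the set of positions of its 1-bits,
    e.g. 37 = 2**5 + 2**2 + 2**0 -> {0, 2, 5}."""
    bits, pos = set(), 0
    while n:
        if n & 1:
            bits.add(pos)
        n >>= 1
        pos += 1
    return bits

def from_binary(bits):
    """Inverse: recover the number from its set of bit positions."""
    return sum(2 ** b for b in bits)

assert to_binary(37) == {0, 2, 5}
assert from_binary({0, 2, 5}) == 37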

We could go further than binary and use von Neumann indices (or “hereditary binary” representations) defined like so:

I({}) = 0

I({a, b, …}) = 2^I(a) + 2^I(b) + …

This creates a bijection between the natural numbers and the hereditarily finite sets. Letting J be I’s inverse, we’d represent 37 as:

J(37) = {J(0), J(2), J(5)} = {{}, {J(1)}, {J(0), J(2)}} = {{}, {{J(0)}}, {{}, {J(1)}}} = {{}, {{{}}}, {{},{{{}}}}}

Why don’t we do this? In practice, I don’t consider it worth it. The multiple recursion of the index encoding and decoding complicates the code considerably. Besides, an implementation can hard-code the first 256 ordinal constants and get the same efficiency (for numbers below 2^256) that we would have if we treated them as urelements. While we gain efficient storage of certain “sparse” numbers (e.g. 2^(2^(2^(2^7 + 1))) + 23) by using hereditary binary, we complicate our algorithms (by, among other things, replacing single recursion with multiple recursion, requiring unpredictable amounts of stack space) and we still can’t do a lot with them. It does not enable us to make tractable operations out of those that would otherwise be intractable (e.g. precise arithmetic on such massive numbers).
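For the curious, here is a Python sketch of the index function I and its inverse J (my own illustration, using frozensets for hereditarily finite sets); the multiple recursion complained about above is visible in both directions:

def I(s):
    """Index of a hereditarily finite set (a frozenset of frozensets)."""
    return sum(2 ** I(e) for e in s)

def J(n):
    """Inverse of I: the hereditarily finite set with index n."""
    elems, pos = set(), 0
    while n:
        if n & 1:
            elems.add(J(pos))   # recurse on the bit position
        n >>= 1
        pos += 1
    return frozenset(elems)

assert J(0) == frozenset()
assert I(J(37)) == 37           # J(37) = {J(0), J(2), J(5)}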

With the binary representation, we have a compact addition function:

S∈tBang> :macro addbinary [~3<~4<2>4<.3>&{'"}]_
Stack: {0, 1, 6, 3, 5} {7, 4, 5, 8}
S∈tBang> :addbinary:
Stack: {0, 1, 4, 3, 9}

We see it correctly adding 107 + 432 = 539. The way it works is by repeatedly executing the behavior:

... X Y -> ... (X xor Y) (X and Y) -> ... (X xor Y) {z + 1 : z ∈ X and Y}

until the latter set (TOS) is zero/empty. This propagates the carries in the addition, and it terminates when there are no more carries. It may not be the most efficient approach, but it does the job in polynomial time (in the sizes of the sets, i.e. the bit-sizes of the numbers).
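Here is the same carry-propagation idea as a Python sketch (mine, operating directly on Python sets of bit positions rather than on SetBang sets):

def add_binary(x, y):
    """Add two numbers given as sets of bit positions: xor is the carry-less
    sum; the carries are the shared bits, shifted left by one."""
    while y:
        x, y = x ^ y, {z + 1 for z in x & y}
    return x

assert add_binary({0, 1, 3, 5, 6}, {4, 5, 7, 8}) == {0, 1, 3, 4, 9}   # 107 + 432 = 539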

With this, we can now illustrate concretely that our dream of efficient (as defined above) SetBang is probably not possible. Consider the following code.

^{:sumbinary:=}(1,0);;

with sumbinary (using a loop, but guaranteed to terminate in polynomial time) defined like so:

:macro sumbinary 02>[\3<:addbinary:2>]_

and the former’s behavior is

... S X -> ... 1 if some subset Y of X has Sum(Y) = S, 0 otherwise.

This is the subset-sum problem, known to be NP-complete. Ergo, if P is not NP, an efficient SetBang implementation is not possible. Even after eschewing $ and restricting ourselves to the most trivial measurement of a set (“Is it empty?”), it will always be easy to write terminating but intractable programs.
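For concreteness, here is the brute-force computation as a Python sketch (with plain integers standing in for the binary-encoded numbers); enumerating the power set is exactly where the exponential blow-up lives:

from itertools import chain, combinations

def subset_sum(s, x):
    """Return 1 if some subset of x sums to s, else 0.
    This enumerates all 2**len(x) subsets, hence the intractability."""
    subsets = chain.from_iterable(combinations(x, r) for r in range(len(x) + 1))
    return 1 if any(sum(sub) == s for sub in subsets) else 0

assert subset_sum(9, {2, 3, 7, 11}) == 1   # 2 + 7 = 9
assert subset_sum(6, {2, 3, 7, 11}) == 0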

In short, one could use laziness to make the {} operator “efficient” and simply not do gnarly computations until needed, and there’s room for cleverness around equality and membership checks (which often don’t require computation of the entire set) but even an empty-check or a \ creates a risk, even in the definite world, of exponential-or-worse time cost.

These results shouldn’t be surprising. Power sets are structured and one can therefore check quickly that, for example, {{{{5}}}} ∈ P(P(P(P(N)))), but they’re still very big. It’s possible, in principle, that SetBang could be implemented in such a way that a large number of complex queries happen efficiently. It’s just not possible, even with $ and [] taken out, to make SetBang fast in all cases. The language is too powerful.

Although there is much FUD around programming languages and speed, SetBang is an example of a language that is literally “not fast”.

Fast(er) math

On GitHub, I’ve included a library of faster math functions in resources/fastmath.sbg. These operate on numbers that have been converted to binary, like so:

Stack: 37
S∈tBang> :tobinary:
Stack: {0, 2, 5}

If you actually want to convert such numbers back into the more familiar ordinals, you can use frombinary. We’ve seen addbinary, and there isn’t much more to the other arithmetic operators: they’re standard “school book” implementations of subtraction, multiplication, and division-with-remainder on binary numbers. They improve the performance of our arithmetic considerably.

Recall that we previously had an implementation that would take 21 million years to determine that 65,537 was prime. We can now do it in about 30 seconds.

S∈tBang> 00/9'''''''/
Stack: {0, 16}
S∈tBang> :primebinary:
Stack: 1

That’s still hundreds of thousands (if not millions) of times slower than, say, trial division in C, but it’s an improvement over what it was.

Here’s a program that (quasi-)efficiently puts the set of all primes on the stack:

S∈tBang> ${~:tobinary::primebinary:(_",__0)}
Stack: {2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53, ...}

In fact, full SetBang allows one to put nearly any describable mathematical object on the stack. You can’t always do much with it, of course. Depending on what it is, you might not be able to do anything with it, as it could be:

{n where n is a Gödel number for a proof of theorem T}

which might equal {} when T is provably false (assuming a much smarter implementation of SetBang) but is “bottom” when T is unprovable.

SetBang 2: SetBang programming

In Part 1, I introduced SetBang, an esoteric language based on set theory. Here, I’ll write some SetBang programs, and show that it’s Turing Complete.

First, let’s talk about the “mystery operator” that I introduced: *, which is identical to:

(~#1=(_{}{}~,_~\2>\2>_.2>\2>\2>_&{}2>{}),0)

Remember that 2> is a swap sequence with behavior ... X Y -> ... Y X. You’ll see it a lot.

What does this * do? Let’s take it apart. It’s a conditional, noting the outer (), and the else-clause is simple: 0. So its behavior is:

... 0 -> ... 0 0.

In the then-case, we ~# TOS and equality-check against 1, and then go into another conditional based on that result. So, if TOS is a 1-element set, we execute _{}{}~. The _ eats the boolean produced by the =:

... {e} 1 -> ... {e}

and then the two {} operations take unions (turning {e} into e, then into U(e)), and the final ~ duplicates the result, leaving us with:

... U(e) U(e).

In the else-case of the inner conditional, we have at least two elements.

... {e, f, ...} 0.

We _ the boolean and ~ the set {e, f, ...}, and then use \2>\2> to extract its elements (remember that 2> is a swap), then _ the remainder set.

... {e, f, ...} e f

We use . to compute the exclusive-or of e and f, and 2> it out of the way. Then we repeat the process using & for the intersection.

... (e . f) (e & f)

With the {} and swaps, we end up at:

... U(e & f) U(e . f)

It’s probably not clear what this does, so let’s look at how it operates when e = {x}, and f = {x, y}:

... 0 -> 0 0

... {{x}} -> x x

... {{x}, {x, y}} -> ... U({x} & {x, y}) U({x} . {x, y}) = ... x y

Ordered pairs

The set-theoretic ordered pair (x, y) is represented by {{x}, {x, y}}. The % operator builds ordered pairs and the * command destructures them. If they weren’t provided, they could be built from the other operators; they exist largely to make the language more convenient.

Substitutions like this can, again, be tested at the SetBang repl like so:

S∈tBang> :test %* %(~#1=(_{}{}~,_~\2>\2>_.2>\2>\2>_&{}2>{}),0)
............... All tests passed.

We prefix both expressions with % because we only care about getting identical actions on ordered pairs. (The behavior of * is undefined on sets that aren’t either ordered pairs or empty.)
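Here is the % / * machinery as a Python sketch on frozensets (the helper names pair, union, and unpair are mine): building the Kuratowski pair, and destructuring it with the intersection / symmetric-difference trick from the dissection of * above, with the singleton case covering pairs whose components are equal.

def pair(x, y):
    """Kuratowski ordered pair: (x, y) = {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

def union(s):
    """U(s): the union of the elements of s."""
    return frozenset().union(*s)

def unpair(p):
    """Destructure a Kuratowski pair, mirroring the logic of *."""
    if len(p) == 1:                       # (x, x) collapses to {{x}}
        e, = p
        return union(e), union(e)
    e, f = p
    return union(e & f), union(e ^ f)     # (first, second)

zero, one = frozenset(), frozenset({frozenset()})   # 0 = {}, 1 = {0}
assert unpair(pair(zero, one)) == (zero, one)
assert unpair(pair(one, one)) == (one, one)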

Using ordered pairs, we can build up linked lists, using {} for nil and (x, y), as designed above, for cons cells. We’ll use this in proving that SetBang is Turing Complete.

Arithmetic

SetBang can do arithmetic. Here’s a predecessor macro:

S∈tBang> :macro pred \_#
Stack: {7} {5}
S∈tBang> 6 :pred:
Stack: {7} {5} 5

Here are imperative addition and subtraction functions:

:macro swap 2>

:macro impPlus [:swap:':swap::pred:]_

:macro impMinus [:swap::pred::swap::pred:]_

with the caveat that minus is interpreted to mean limited subtraction (returning 0 when conventional subtraction would yield a negative number). These work, but we have the tools to do better. These macros, after all, use imperative loops. They’re not purely functional and they don’t invoke set theory.

We can get a truer, purer minus using -#, e.g. ... 7 4 -> ... {4, 6, 5} -> ... 3.

How do we get a purer addition function? This can be achieved using ordered pairs:

:macro plus 02>{2>~3>%"}3>_12>{2>~3>%"}2>_|#

How does it work? It uses {}-comprehensions to map over each set:

  • X -> {(0, x) for all x ∈ X}
  • Y -> {(1, y) for all y ∈ Y}

and then a union, followed by a cardinality check, can be used to perform the addition.
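Here is the idea as a Python sketch (mine): the tags make the two sets disjoint, so the cardinality of the union is the sum.

def plus(x, y):
    """Add ordinals x and y (sets {0, ..., n-1}) by tagging, unioning, counting."""
    return len({(0, e) for e in x} | {(1, e) for e in y})

assert plus(set(range(7)), set(range(4))) == 11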

Cartesian products aren’t very hard to write either.

:macro prod {2>{2>%"}};

It has a set comprehension within another one. That’s not a problem. Let’s look at how it works. Starting from ... X Y, we go immediately into a loop over the Y-elements. We do a swap and start a loop on the X-elements, and have ... y x. The 2> is a swap, % makes the pair, and " puts it in a set.

The behavior of the inner comprehension is ... y X -> ... y {(x, y) for all x ∈ X}.

One might expect, then, that the behavior of the outer loop should be ... X Y -> ... (X x Y). It’s close. The thing to remember, though, is that side effects on the stack that would be expected to move (and destroy) the X never actually happen. The stack state that exists when a % is executed is used for each “iteration” of the {} comprehension.

Thus, it’s not possible to populate the stack with a set using a program like {~"}. If you want to do that, you have to use the imperative []-loop, like in this program: [\2>]_, which iteratively applies \ to a set, and then deletes it when it’s empty.

To multiply numbers, there’s a simple program that does the job:

:prod:#, which macroexpands to {2>{2>%"}};#.
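In Python terms (again just an illustration, not the implementation), the product of two ordinals is the cardinality of their Cartesian product:

def prod(x, y):
    """Cartesian product, as a set of ordered pairs."""
    return {(a, b) for b in y for a in x}

def times(x, y):
    """Multiply ordinals (sets {0, ..., n-1}) by counting the product."""
    return len(prod(x, y))

assert times(set(range(6)), set(range(7))) == 42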

Can we divide? Yes, we can. Here’s one way to do it. It turns out to be unbearably slow, but it is mathematically correct:

:macro quot 2>~'3<2>~{3<:times:"}&\_#

Why is it slow? It gives us the following set-theoretic definition of division:

n quot k = #(n’ ∩ k*n’) – 1, where n’ = n + 1 = {0, …, n} and k*n’ = {0, k, 2k, …, kn}; e.g. 19 div 5 = #({0, … 19} ∩ {0, 5, 10, 15…}) – 1 = 4 – 1 = 3

Unfortunately, it’s O(n^3)– worse yet, not in the size of the number, but in the number itself. This is not an efficient division algorithm.
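Here is that definition as a Python sketch (plain integers standing in for the ordinals); materializing every multiple of k up to k*n before counting is exactly where the cost goes:

def quot(n, k):
    """Floor division via #({0, ..., n} ∩ {0, k, 2k, ..., k*n}) - 1."""
    n_succ = set(range(n + 1))            # the ordinal n' = n + 1
    multiples = {k * m for m in n_succ}   # "k * n'"
    return len(n_succ & multiples) - 1

assert quot(19, 5) == 3
assert quot(20, 5) == 4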

In fact, SetBang carries a persistent danger of inefficiency. Why is that? Well, let’s consider what hereditarily finite sets are: Rose trees whose nodes contain no information. In Haskell, this could be implemented as follows.

data Set = Empty | Node [Set]

or, equivalently:

RoseTree (), where

data RoseTree a = Leaf a | Branch [RoseTree a]

An implementation that uses shared data (and, as mine does, exploits numeric constants) is required; otherwise, you’ll have exponential storage just to represent the natural numbers. Given that there is no control over choice order (it’s implementation-defined) it is hard to stamp out the risk of unexpected exponential behavior completely (although we will not observe it in any example here).

One way to make arithmetic faster would be to use a different encoding than the ordinals. One candidate would be to use bit sets (e.g. 23 = {0, 1, 2, 4}) and write the arithmetical operators on those (as well as conversions both ways). Another would be to use von Neumann indices, where a hereditarily finite set’s index is computed as:

I({}) = 0

I({a, b, …}) = 2^I(a) + 2^I(b) + …

This function I is relatively easy to invert (call its inverse J). For example, we’d represent the number 11 not with {0, 1, …, 10} but with:

J(11) = {J(0), J(1), J(3)}

J(3) = {J(0), J(1)}, J(1) = {J(0)}, J(0) = {}, ergo:

J(11) = {{}, {{}}, {{}, {{}}}}

These sets are far more compact than the ordinals for the same numbers. Arithmetic could be construed to operate on numbers represented in this way, and would then take on a flavor of (much more efficient) binary arithmetic. We won’t be doing that here, though: it’s far too practical.

You can test for primality in SetBang:

:macro not 0=

:macro divides ~3<2>{2>:times:"};2>?

:macro prime ~2-{2>:divides:}:not:

Is it fast? No. It’s horribly inefficient– in the current implementation, it’s O(n^4), and takes 10 seconds to figure out that 23 is prime, so we can expect it to take a couple of days on 257, and 21 million years on 65,537– but it’s correct.
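Transcribed into a Python sketch (mine, with plain integers standing in for the ordinal-encoded numbers, and without special-casing 0 and 1), the definitions behind divides and prime look roughly like this:

def divides(d, n):
    """d divides n iff n appears among the multiples {d*0, d*1, ..., d*n}."""
    return n in {d * m for m in range(n + 1)}

def prime(n):
    """n is prime iff no d in {2, ..., n-1} divides it."""
    return not any(divides(d, n) for d in range(2, n))

assert prime(23)
assert not prime(21)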

If one wished to put the entire set of primes (lazily evaluated) on the stack, one could do so:

${~:prime:(_",__0)}

which, in its full macroexpanded glory, becomes:

${~~2-{2>~3<2>{2>{2>{2>%"}};#"};2>?}0=(_",__0)}

I don’t recommend doing this. If you’re interested in working with large (6+ bit) prime numbers, I recommend representing the numbers in a more efficient way. Of course, that removes from SetBang its delicious impracticality, and suggests that one might use other languages altogether when one needs to work with large primes like 47.

The fact that there are five {}-comprehensions in the macro-less form above suggests that it is O(n^5) to compute the nth prime, and that’s about correct. (It’s slightly worse; because the nth prime is approximately n*log(n), it’s O(n^5*(log n)^4).) This might be one of the few ways in which SetBang is readable: nested {} comprehensions give you an intuitive sense of how cataclysmically inefficient your number-theory code is. And remember that n here is the number itself, and not the size (in, say, bits or digits) of the number.

Data structures

This language obviously isn’t the best choice for number theory, but with % and * we have the machinery to build up linked lists, so let’s do that.

Let’s write some macros.

:macro BIG 3^^\_#

:macro u *2>~:BIG:?(__:BIG:,_')2>%

:macro d *2>\_#2>%

:macro l %

:macro r *

:macro p *2>~!2>*

:macro g *2>_@2>*

:macro w *2>[2>%

:macro x *2>]2>%

What do these do? Well, the first one, BIG, simply puts the number 255 (2^(2^3) – 1) on the stack. That’s the maximum value of a single unsigned byte.

Except for l, the remaining macros assume that TOS will be an ordered pair or {}, so let’s consider what might make that always true, in light of the other operators. To take note of an edge case, remember that * destructures an ordered pair and that, when TOS is 0 = {}, it behaves as ... 0 -> ... 0 0. This might suggest that these macros are intended for an environment in which TOS is always a linked list (possibly {}). That would be the correct intuition, and we can understand the first six operators in terms of their effects on the stack when TOS is a linked list.

  • u : ... (h, t) -> ... (h', t) where h' = min(h + 1, 255)
  • d : ... (h, t) -> ... (h*, t) where h* = max(h - 1, 0)
  • l : ... a (h, t) -> ... (a, (h, t)) and | (h, t) -> | (0, (h, t))
  • r : ... (h, t) -> ... h t and ... 0 -> 0 0
  • p : ... (h, t) -> ... (h, t) with #h printed to console
  • g : ... (_, t) -> ... (c, t) where c is read from console

It’s worth noting the behavior of l and r in edge cases. The edge case of l is when the stack is deficient, noting that % demands 2 arguments. Because SetBang left-fills with {}’s, the behavior is to add a {} to the linked list at TOS. The edge case of r is when TOS is 0, in which case we end up with another 0.
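To make those stack effects concrete, here is a rough Python model of u, d, l, and r (my own illustration, using nested tuples as cons cells and a Python list as the stack; it only handles the ordinary case where TOS is a pair):

NIL = ()                      # {} plays the role of nil
MAX = 255                     # the value that BIG puts on the stack

def u(stack):                 # ... (h, t) -> ... (h', t), h' = min(h + 1, 255)
    h, t = stack.pop()
    stack.append((min(h + 1, MAX), t))

def d(stack):                 # ... (h, t) -> ... (h*, t), h* = max(h - 1, 0)
    h, t = stack.pop()
    stack.append((max(h - 1, 0), t))

def l(stack):                 # ... a (h, t) -> ... (a, (h, t)), left-filling 0 if nothing is below
    lst = stack.pop()
    a = stack.pop() if stack else 0
    stack.append((a, lst))

def r(stack):                 # ... (h, t) -> ... h t, and ... 0 -> ... 0 0
    top = stack.pop()
    stack.extend([NIL, NIL] if top == NIL else [top[0], top[1]])

stack = [(0, NIL)]            # "start with a 0"
u(stack); u(stack); l(stack)
assert stack == [(0, (2, NIL))]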

The remaining two macros, w and x, might look a little bit odd. Standing alone, neither is legal code. That’s OK, though. SetBang doesn’t require that macros expand to legal code, and as long as w’s and x’s are balanced, it will generate legal code. So let’s consider the expansion of wSx, where S is a string of SetBang code. The behavior of wSx, then, is a SetBang []-loop, but with the head of TOS (rather than TOS itself) being used to make the determination about whether to continue the loop.

We can now prove that SetBang is Turing Complete. Brainfuck is Turing Complete, and we can translate any Brainfuck program to a SetBang program as follows:

  • Start with a 0,
  • replace all instances of +, -, >, <, ., ,, [, and ] with :u:, :d:, :l:, :r:, :p:, :g:, :w:, and :x:, respectively. 

Therefore, if for some reason you wish not to write in SetBang, you can always write your program in Brainfuck and transliterate it to SetBang!
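As a sketch in Python (the mapping below is copied straight from the list above; the function name is mine), the transliteration is a character-by-character substitution:

BF_TO_SETBANG = {'+': ':u:', '-': ':d:', '>': ':l:', '<': ':r:',
                 '.': ':p:', ',': ':g:', '[': ':w:', ']': ':x:'}

def brainfuck_to_setbang(bf_program):
    """Transliterate Brainfuck into SetBang macro calls, prepending the initial 0."""
    return '0' + ''.join(BF_TO_SETBANG.get(c, '') for c in bf_program)

assert brainfuck_to_setbang('+++.') == '0:u::u::u::p:'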

This proves that SetBang is Turing Complete, but that shouldn’t surprise us. It’s a fairly complex language, using every punctuation mark on the keyboard as a command. Powerful commands like % and * feel like cheating, and clearly the numerical commands aren’t all necessary: we can always write 0'''' instead of 4, for example.

So how much can we cut and still have a usable language? Which operators are necessary, and which ones can we do away with? And as we cut away more and more from the language, what does the code end up looking like? This is what we’ll focus on in Part 3.