<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI business Archives - ShiftMag</title>
	<atom:link href="https://shiftmag.dev/tag/ai-business/feed/" rel="self" type="application/rss+xml" />
	<link>https://shiftmag.dev/tag/ai-business/</link>
	<description>Insightful engineering content &#38; community</description>
	<lastBuildDate>Thu, 29 Aug 2024 13:36:38 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://shiftmag.dev/wp-content/uploads/2024/08/cropped-ShiftMag-favicon-32x32.png</url>
	<title>AI business Archives - ShiftMag</title>
	<link>https://shiftmag.dev/tag/ai-business/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Two rules of AI business and startups that ignore them</title>
		<link>https://shiftmag.dev/two-rules-of-ai-business-and-startups-that-ignore-them-4109/</link>
		
		<dc:creator><![CDATA[Zeljko Svedic]]></dc:creator>
		<pubDate>Thu, 29 Aug 2024 13:02:25 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI business]]></category>
		<category><![CDATA[AI product]]></category>
		<category><![CDATA[AI startups]]></category>
		<category><![CDATA[large language models]]></category>
		<guid isPermaLink="false">https://shiftmag.dev/?p=4109</guid>

					<description><![CDATA[<p>The majority of AI entrepreneurs and engineers don’t pay attention to them, maybe because these rules show why their AI project will fail.</p>
<p>The post <a href="https://shiftmag.dev/two-rules-of-ai-business-and-startups-that-ignore-them-4109/">Two rules of AI business and startups that ignore them</a> appeared first on <a href="https://shiftmag.dev">ShiftMag</a>.</p>
]]></description>
										<content:encoded><![CDATA[<figure class="wp-block-post-featured-image"><img fetchpriority="high" decoding="async" width="1200" height="630" src="https://shiftmag.dev/wp-content/uploads/2024/08/SOTA.png?x91379" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="" style="object-fit:cover;" srcset="https://shiftmag.dev/wp-content/uploads/2024/08/SOTA.png 1200w, https://shiftmag.dev/wp-content/uploads/2024/08/SOTA-300x158.png 300w, https://shiftmag.dev/wp-content/uploads/2024/08/SOTA-1024x538.png 1024w, https://shiftmag.dev/wp-content/uploads/2024/08/SOTA-768x403.png 768w" sizes="(max-width: 1200px) 100vw, 1200px" /></figure>


<p>These rules are not new, and they are not mine; I stole them from <a href="https://en.wikipedia.org/wiki/Andrew_Ng" target="_blank" rel="noreferrer noopener">Andrew Ng</a> and <a href="https://www.ben-evans.com/" target="_blank" rel="noreferrer noopener">Benedict Evans</a>, two men with a huge following. </p>



<h2 class="wp-block-heading"><span id="ai%e2%80%99s-law-of-diminishing-returns">AI’s Law of diminishing returns</span></h2>



<p>To paraphrase Andrew’s words from <a href="https://www.coursera.org/specializations/deep-learning?msockid=354e180f1a3b6e2e10a40cbc1b8a6f0a">Coursera’s Deep Learning Specialization course</a>:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The effort to halve an AI system&#8217;s error rate is similar, regardless of the starting error rate.&nbsp;</p>
</blockquote>



<p>This is not very intuitive. If an AI system passes 90% of test cases and fails on 10%, then you are 90% done, right? Fix the remaining 10% of errors, and you will have 100% accuracy? Absolutely not.</p>



<p>If it took you six months to halve the error rate from 20% to 10%, it will take you approximately another six months to halve 10% to 5%. And another six months to halve 5% to 2.5%. Ad infinitum. You will never achieve a 0% error rate on a real-world AI system. For an illustrative example, see this typical chart of error rate vs the number of training samples:</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="538" src="https://shiftmag.dev/wp-content/uploads/2024/08/LLM-error-rate-1024x538.png?x91379" alt="" class="wp-image-4111" srcset="https://shiftmag.dev/wp-content/uploads/2024/08/LLM-error-rate-1024x538.png 1024w, https://shiftmag.dev/wp-content/uploads/2024/08/LLM-error-rate-300x158.png 300w, https://shiftmag.dev/wp-content/uploads/2024/08/LLM-error-rate-768x403.png 768w, https://shiftmag.dev/wp-content/uploads/2024/08/LLM-error-rate.png 1200w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p><em>Credits: Zeljko Svedic</em></p>



<p>Notice that later in the training process, the training set size increases exponentially with each error rate halving, and <strong>the error rate never reaches zero</strong>. Sure, you will get more efficient at acquiring training data (e.g., by using low-quality sources or synthetic data). Still, it is hard to believe that acquiring 10X more data is going to be much easier than acquiring the initial set.&nbsp;</p>
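<p>This scaling can be sketched numerically. Assuming the error rate follows a power-law fit like the chart above, <em>error&#160;&#8776;&#160;a&#160;&#183;&#160;n<sup>&#8722;&#945;</sup></em> (both <em>a</em> and the exponent <em>&#945;</em> here are illustrative assumptions, not measured values), each halving of the error rate multiplies the required training set by a constant factor:</p>

```python
# Illustrative power-law fit: error_rate ~ a * n_samples**(-alpha).
# Both a and alpha are assumed values for illustration only; real
# exponents vary by task, model architecture, and data quality.
def samples_needed(target_error, a=1.0, alpha=0.5):
    """Training-set size needed to reach a target error rate."""
    return (a / target_error) ** (1.0 / alpha)

# With alpha = 0.5, every halving of the error rate multiplies the
# data requirement by 2 ** (1 / alpha) = 4x, and it never reaches zero.
for err in [0.20, 0.10, 0.05, 0.025]:
    print(f"error {err:.1%}: ~{samples_needed(err):,.0f} samples")
```

<p>The constant multiplier is why the effort per halving stays roughly the same even as the error rate shrinks.</p>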



<p>This rule becomes more intuitive when dissecting what an AI system error rate represents: uncovered real-world special cases. There are an infinite number of them. For example, one of the easiest machine learning (ML) tasks is <strong>classifying images of dogs and cats</strong>. It is an introductory task with <a href="https://wtfleming.github.io/blog/pytorch-cats-vs-dogs-part-3/" target="_blank" rel="noreferrer noopener">online tutorials that get 99% accuracy</a>. But solving the last 1% is incredibly hard. For example, is the <a href="https://s.abcnews.com/images/Lifestyle/HT-cat-dog-02-jef-161110_16x9_992.jpg" target="_blank" rel="noreferrer noopener">creature in this image</a> a dog or a cat?</p>



<p>It is <a href="https://www.atchoumthecat.com/" target="_blank" rel="noreferrer noopener">Atchoum, the cat</a>, who rose to fame because half of the humans recognized him as a dog. The human accuracy on dog/cat classification within 30 seconds is <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2007/10/CCS2007.pdf">99.6%</a>. A dog/cat classifier with less than a 0.4% error rate would be superhuman. But it is possible. A training set with hundreds of thousands of strange-looking dogs and cats would teach a neural network to focus just on details encoded in dog or cat chromosomes (e.g. <a href="https://en.wikipedia.org/wiki/Cat_senses#Sight">cat eyes</a>).</p>



<p>However, building such a dataset is orders of magnitude more complex than a tutorial with 99% accuracy. Other problems lurk in that 1% error rate: photos that are too dark, photos in low resolution, photo compression artifacts, photo post-processing by modern smartphones (adding of non-existing details), dogs and cats with medical conditions, etc. The problem space is infinite. This is still considered a solved ML problem, though, because a <strong>1% error rate is low enough for all practical purposes.&nbsp;</strong></p>



<p>But for some problems, even a 0.01% error rate is not satisfactory, for example: full self-driving (FSD). Elon Musk said in a <a href="https://fortune.com/2015/12/21/elon-musk-interview/?xid=yahoo_fortune#:~:text=%E2%80%9CWe%E2%80%99re%20going%20to%20end%20up%20with%20complete%20autonomy%2C%20and%20I%20think%20we%20will%20have%20complete%20autonomy%20in%20approximately%20two%20years.%E2%80%9D">2015 interview with Fortune</a>:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>We’re going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years.</p>
</blockquote>



<p>Tesla was so confident in that prediction that they started <a href="https://en.wikipedia.org/wiki/Tesla_Autopilot#History" target="_blank" rel="noreferrer noopener">selling a full self-driving add-on package in 2016</a>, and they weren’t the only ones. <a href="https://en.wikipedia.org/wiki/Kyle_Vogt" target="_blank" rel="noreferrer noopener">Kyle Vogt</a>, CEO of Cruise, wrote a piece called <a href="https://medium.com/cruise/how-we-built-the-first-real-self-driving-car-really-bd17b0dbda55" target="_blank" rel="noreferrer noopener">How we built the first real self-driving car (really)</a> in 2017, in which <a href="https://medium.com/cruise/how-we-built-the-first-real-self-driving-car-really-bd17b0dbda55#:~:text=the%20most%20critical%20requirement%20for%20deployment%20at%20scale%20is%20actually%20the%20ability%20to%20manufacture%20the%20cars%20that%20run%20that%20software">he claimed</a>:</p>



<p><em>“&#8230;the most critical requirement for deployment at scale is actually the ability to manufacture the cars that run that software”</em></p>



<p>So, the software and the working prototype are done; they just need to mass-produce “100,000 vehicles per year.”&nbsp;</p>



<p>Fast forward to 2024. Elon Musk&#8217;s predictions for autonomous Tesla vehicles deserved a <a href="https://en.wikipedia.org/wiki/Criticism_of_Tesla,_Inc.#Musk's_promises" target="_blank" rel="noreferrer noopener">lengthy Wikipedia table, mostly in red</a>:&nbsp;</p>



<figure class="wp-block-image"><img decoding="async" src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXc2IyKAam_bocLaKBZ5nziCHufwEBeM9W9VmNYySzr2GayicGYxRsFpMOqFznvxwUI_EP3BdfY4sHmlv2t1QCAEGSXEz2s3IO_jjEZU16Yp99D9ttXsZqHVjowRwmzLmW8d6CdX8BIYTFoj8JXduOYO4C1_?key=qkGfQMxmVtI818WaLgV3Aw" alt=""/></figure>



<p>Credits: Wikipedia</p>



<p>What about Kyle Vogt? In October of 2023, Cruise’s car <a href="https://archive.ph/yQMOS" target="_blank" rel="noreferrer noopener">dragged a pedestrian for 20 feet</a>, after which California’s DMV suspended Cruise’s self-driving taxi license. Kyle <a href="https://archive.ph/SEeP3" target="_blank" rel="noreferrer noopener">“resigned” as CEO</a> in November 2023.&nbsp;&nbsp;</p>



<p>Don’t misunderstand me—I believe autonomous cars will have a significant market share, probably in the next decade. The failed predictions above illustrate what happens when <strong>entrepreneurs don’t respect the AI law of diminishing returns.</strong> </p>



<p>Elon and Kyle probably saw a demo of a full self-driving car that could drive on its own on a sunny day on a marked road. Sure, a safety driver needed to intervene sometimes, but that was only 1% of the drive time. It is easy to conclude that “autonomous driving is a solved problem,” as Elon said in 2016. </p>



<p>Notice how ML scientists and engineers didn’t make such bombastic claims. They were aware of many edge cases, some of which are described in crash reports. Edge cases include:</p>



<ul class="wp-block-list">
<li>A <a href="https://www.wsj.com/articles/video-shows-final-seconds-before-fatal-uber-self-driving-car-crash-1521673182" target="_blank" rel="noreferrer noopener">pedestrian crossing a two-lane avenue with a bicycle at night and without lights</a> (2018 Uber crash).</li>



<li>The <a href="https://en.wikipedia.org/wiki/List_of_Tesla_Autopilot_crashes#Mountain_View,_California,_USA_(March_23,_2018)">lane’s white lines diverging before a barrier</a> (2018, Tesla Model X crash).</li>



<li>A <a href="https://www.washingtonpost.com/technology/interactive/2023/tesla-autopilot-crash-analysis/">white trailer truck against the white sky</a> (2019 crash, Tesla Model 3 decapitated the driver and continued driving for 40 seconds).</li>



<li>A pedestrian knocked by another vehicle into the path of an FSD car (the 2023 Cruise incident mentioned above).</li>
</ul>



<p>Why so many companies promised a drastic reduction in self-driving error rates in such a short time without having a completely new ML architecture is an open question. Scaling laws for convolutional neural networks have been known for some time, and the new transformer architecture obeys a similar scaling law.&nbsp;</p>



<h2 class="wp-block-heading"><span id="ai%e2%80%99s-product-vs-feature-rule">AI’s Product vs Feature rule</span></h2>



<p>When is an AI system a good stand-alone product, and when is it just a feature? In the words of Benedict Evans from <a href="https://another-podcast.simplecast.com/episodes/ai-summer-rzubFrA4" target="_blank" rel="noreferrer noopener">The AI Summer</a> podcast: “Is this a feature or a product? Well, <strong>if you can’t guarantee it is right</strong>, it’s a feature. It needs to be wrapped in something that manages or controls expectations.” I love that statement. The “it is right” part can be broken down using the error rate:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>If your AI system has a higher error rate than target users, you have an AI feature in an existing workflow, not a stand-alone AI product.</p>
</blockquote>



<p>This rule is more intuitive than the law of diminishing returns. If target users are better at a task, they will not like stand-alone AI system results. They could still use AI to save them effort and time, but they will want to <em>review</em> and <em>edit</em> AI output. If AI completely fails at a task, humans will use <em>the old workflow and the old software</em> to finish the task.</p>



<p>Let&#8217;s take MidJourney, for example, which generates whole images based on a text prompt. When I used it for a hobby project last year, satisfying artistic images appeared instantly, like magic. But then I spent hours fixing those creepy hands:</p>



<figure class="wp-block-image"><img decoding="async" src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcnFrI68QKxjoIwDwyYsIEttVNjCoNIX7cfKfQppjp0i8ahHu1XL1iDPReK8XXKuiargkOWcr0rKaSpTSyuR2S_SHQfk7XefYely-tnSa79r8TOAAnzxeLAlClOyO4lyYGBa4fKAICZZTYSDeOc0sIVHJos?key=qkGfQMxmVtI818WaLgV3Aw" alt=""/></figure>



<p>Credits: CC0 via <a href="https://neural.love/ai-art-generator/1ed5b225-76ef-66f4-b430-cbab93677b76/hand-shaking-hand">neural.love</a>&nbsp;&nbsp;</p>



<p>Each time MidJourney created a new image, one of the hands had strange artifacts. Finally, it generated an image with two normal hands—but then it destroyed the ears. The problem was less with wrong details and more with bad UI, which didn’t allow correction of the AI’s mistakes.</p>



<p>Adobe’s approach is different—it treats <a href="https://blog.adobe.com/en/publish/2023/05/23/future-of-photoshop-powered-by-adobe-firefly" target="_blank" rel="noreferrer noopener">generative AI as just one feature</a> in its product suite. You use an existing tool, select an area, and then do a generative fill:</p>



<p><img decoding="async" src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfz1gz_voz1VJpnjvSYhiPV1ThH5PrJmo2AORrdOv9Lt0s83iYo2-k7QabtzcGpZopKt_-lfOpAGC9tf08RWrG0LhOqs5U3jK42F9AWkPo2yFjAmobH7BofcBG9URMDJXKTgZdt6bFZoBhxpqLZPQ7hoX4Z?key=qkGfQMxmVtI818WaLgV3Aw" width="315" height="178"><img loading="lazy" decoding="async" src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXd0fuRaZ1Z9s2Nb4LxSgLZlxOLlBvth7TjKQI_RiLOSGttE4lWMao5ppUIy-nhguf1udR3hY55KiclvABwNX8Cch5LnaVBO3lwA3oC8CDuRuU37-PrhRRCY5g2GtUQUSa4606smSgHUwhQFUVf7JvX1ZYNi?key=qkGfQMxmVtI818WaLgV3Aw" width="280" height="178"><br>Credits: <a href="https://news.adobe.com/news/news-details/2023/Adobe-Unveils-Future-of-Creative-Cloud-with-Generative-AI-as-a-Creative-Co-Pilot-in-Photoshop-default.aspx/default.aspx">Adobe press release</a></p>



<p>You can use it for the smallest of tasks, like removing cigarette butts from grass in a wedding photoshoot. If you dislike AI grass, no problem—revert to the old designer joy of manually cloning grass. Also, Adobe Illustrator has <a href="https://www.theverge.com/2024/7/23/24204231/adobe-photoshop-illustrator-generative-ai-firefly-vector-features">generative Vector AI</a> that generates vector shapes you can edit to your liking.</p>



<p>MidJourney makes more impressive demos, but Adobe’s approach is more useful to professional designers. That doesn’t mean MidJourney doesn’t make sense as a product; its target users are the ones who don’t care about details. For example, last Christmas, I got the following greetings image over WhatsApp:</p>



<p><img loading="lazy" decoding="async" src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfx4KDbQOwxyuU_9gwupCnZ_XYyqNbi4Qzld9xLSyi8-V47xOXBVOhtTu-KtdwygTNPnr9TgqIqInO3C5kIXrGk0vxMjy13O7nBdYSrf3NaO7BgjrOytbgQMLss7ZIev85Xnq7W9Y1zFuN3fzNy9SXzgBfP?key=qkGfQMxmVtI818WaLgV3Aw" width="421" height="497"><br>Credits: Zeljko Svedic</p>



<p>Did you notice baby Jesus&#8217; hands and eyes? Take another look:</p>



<p><img loading="lazy" decoding="async" src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeneOxawFxlDXPURQN_fxRZrlyW-PsKhMPPDqp2F8aoioC2zzngMiM09_uoJlRwCKwMEqqEyUdO0-6ENGkLrCxO0DX82AHLMUhvZAmIUwV2-mCau3dn4gKUJfy6DLSbCD7l1oQ2xSgedQ_Nl1vR5Tzjt-8l?key=qkGfQMxmVtI818WaLgV3Aw" width="342" height="240"><br>Credits: Zeljko Svedic</p>



<p>That would never pass with a designer, but that is not the point. There is a whole army of users who don’t care about image composition and details; they just want images that go with their content. In other words, MidJourney is not a replacement for Adobe’s Creative Suite—it is a replacement for stock photo libraries like Shutterstock and Getty Images. And judging by the recent popularity of AI-generated images on social media and the web, people like artsy MidJourney images more than stock photos.</p>



<p>The low-hanging fruit for stand-alone AI products is use cases where a high error rate doesn’t matter or is still lower than the human error rate. An unfortunate example is guided missiles; in the Gulf War, <a href="https://insidedefense.com/inside-navy/navy-says-fewer-60-tomahawks-were-successful-gulf-war">the accuracy of Tomahawk missiles was less than 60%</a>. But the army was happy to buy Tomahawks because they were still much more accurate than older alternatives, as <a href="https://www.cna.org/our-media/indepth/2021/02/wrong-war-right-weapons">fewer than 1 in 14 unguided bombs hit their targets</a>.</p>



<h2 class="wp-block-heading"><span id="evaluating-startups-based-on-the-above-rules">Evaluating startups based on the above rules</span></h2>



<p>The great thing is that error rates are measurable, so the above rules give a framework to judge an AI startup quickly. Below is a simple startup example.</p>



<p>Devin AI made quite a splash in <a href="https://x.com/cognition_labs/status/1767548763134964000">March of 2024 with a video demo of a developer AI</a> that can create fully working software projects. According to the announcement, Devin was “evaluated on the <a href="https://www.swebench.com/" target="_blank" rel="noreferrer noopener">SWE-Bench</a>” (a relevant benchmark) and “correctly resolves 13.86% of the issues unassisted, far exceeding the previous state-of-the-art model performance of 1.96% unassisted.” So, the current state-of-the-art (SOTA) has a 98% error rate, and they claim an 86% error rate. Even if that claim is valid (it wasn’t independently verified), why do their promo videos show success after success? It turns out that the video examples were <a href="https://80.lv/articles/first-ai-software-engineer-creators-are-accused-of-lying/">cherry-picked, the task description was changed, and Devin took hours to complete them</a>.</p>



<p>In my opinion, Microsoft took the right approach with GitHub Copilot. Although <strong>LLMs work surprisingly well for coding</strong>, they still make a ton of mistakes and don’t make sense as a stand-alone product. Copilot is a feature integrated into popular IDEs that pops up with suggestions when they are likely to help. You can review, edit, or improve on each suggestion.&nbsp;&nbsp;</p>



<p>Again, don’t get me wrong. I think coding SOTA will drastically improve over the next few years, and one day, <strong>AI will be able to solve 80% of GitHub issues</strong>. Devin AI is still far away from that day, although the company has<a href="https://fortune.com/2024/03/31/cognition-labs-ai-startup-seeks-2-billion-valuation-investor-frenzy-warnings-bubble/" target="_blank" rel="noreferrer noopener"> a valuation of $2 billion in 2024</a>.</p>



<p>More formally, the framework for evaluation is:</p>



<ol class="wp-block-list">
<li>Find a <em>relevant benchmark</em> for a specific AI use case.&nbsp;</li>



<li>Find the current <em>state-of-the-art (SOTA) error rate</em> and <em>human error rate</em> on that benchmark.</li>



<li>Is the SOTA error rate better than or comparable to the human error rate?
<ol class="wp-block-list">
<li>If yes (unlikely): Great, the problem is solved, and you can create a stand-alone AI product by reproducing SOTA results.</li>



<li>If no (likely): Check if there is a niche customer segment that is more tolerant of errors. If yes, you can still have a niche stand-alone product. If you can’t find such a niche, go to the next step.</li>
</ol>
</li>



<li>You can’t release a stand-alone AI product. Wait for SOTA to get better, pour money into research, or go to the next step.</li>



<li>Think about how to integrate AI as a feature into the existing product. Make it easy for users to detect and correct AI’s mistakes. Then, measure AI’s return on investment:<br><br>AI_ROI = Effort_saved_by_accurate_AI_responses / Effort_lost_on_checking_and_modifying_AI_responses<br><br>If too much user time is spent checking and correcting AI errors (AI_ROI &lt;= 1), you don’t even have a feature.<br></li>
</ol>
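<p>The five steps above can be condensed into a short sketch. All error rates, effort numbers, and the niche flag here are hypothetical illustrations, not real benchmark figures:</p>

```python
# A minimal sketch of the evaluation framework above. Plug in error
# rates from a relevant benchmark; the numbers below are made up.
def evaluate_ai_use_case(sota_error, human_error, error_tolerant_niche=False):
    """Steps 1-4: stand-alone product, niche product, or feature?"""
    if sota_error <= human_error:
        return "stand-alone AI product"
    if error_tolerant_niche:
        return "niche stand-alone AI product"
    return "AI feature in an existing product"

def ai_roi(effort_saved, effort_checking):
    """Step 5: AI_ROI = effort saved by accurate AI responses /
    effort lost on checking and modifying AI responses."""
    return effort_saved / effort_checking

# Hypothetical case: SOTA error 10% vs. human error 0.4%, no
# error-tolerant niche, and users save 3 hours per hour of review.
print(evaluate_ai_use_case(sota_error=0.10, human_error=0.004))
print("worth shipping as a feature:", ai_roi(3.0, 1.0) > 1)
```

<p>The point of writing it down is that every branch depends on measurable quantities, so the verdict on a startup can be checked rather than argued.</p>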



<p>Or, to summarize everything discussed here in one sentence:</p>



<p><img loading="lazy" decoding="async" src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfRoAPSKKycPMv7z4mWx-DgZW9MjVyqCyL0z69kGNmxalaB6ziD-xwt36610TWwx4Xc54UkzU7VuYFENbmZTbn8kPiNEx3Y14mqWxbh2d50OT-1RG-yFbO9wTBnxiI7jqodDdNiNm5Q2MTclmt-ehvH09Mj?key=qkGfQMxmVtI818WaLgV3Aw" width="333" height="419"><br>Credits: Zeljko Svedic</p>



<p>Every innovative AI use case will eventually become a feature or a product once the error rates allow it. If you want to make it happen faster, become a researcher. </p>



<p><a href="https://openai.com/index/team-update-january/">OpenAI’s early employees</a> spent seven years on AI research before overnight success with ChatGPT. Ilya Sutskever, OpenAI’s chief scientist, still didn’t want to release the GPT-3.5-based ChatGPT because <a href="https://www.businessinsider.com/chatgpt-was-inaccurate-boring-when-it-launched-openai-cofounder-2023-10#:~:text=OpenAI%27s%20chief%20scientist%20admitted%20that%20he%20didn%27t%20think,was%20taken%20by%20surprise%20by%20its%20explosive%20popularity.">he was afraid it hallucinated too much</a>. Science takes time.</p>



<p><em>If you found this article useful, please share.</em></p>
<p>The post <a href="https://shiftmag.dev/two-rules-of-ai-business-and-startups-that-ignore-them-4109/">Two rules of AI business and startups that ignore them</a> appeared first on <a href="https://shiftmag.dev">ShiftMag</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
