Hello and welcome to Your Operations Solved, for Wednesday, June 2nd, 2021
I'm your host, Channing Norton, of PC Solutions, and this is the 24th episode of our show.
Listen to us Wednesday and Friday mornings at 9:30 Eastern, or on our bonus shows released on the 2nd Saturday of each month at 2:30 PM. If you find the show helpful or informative, please do give it a like on your platform of choice, or share it with someone else who might also enjoy it.
If you have a problem in your business you want solved, email us at Solutions@youroperationssolved.com, and we may just feature your business on our bonus show as we tackle it to help you and others.
With that out of the way, let's get started on today's headlines
First, an update to an existing story.
We've talked several times on this show about Google's new targeted advertising technology, FLoC, and the associated controversies over its potential to target ads in a predatory manner. In response to the backlash, Google has added a new setting to opt out of FLoC in the latest build of Google Chrome... if you're willing to dig for it in the obscure chrome://flags page. Alternatively, one could use another browser, as both Microsoft Edge and Mozilla Firefox block FLoC by default. FLoC is currently in early-stage trials affecting about 0.5% of browsers in selected regions.
With that out of the way, let's talk about our main story: Turkish killbots.
Being that this is a technology show, I try to stay as FAR away from politics as possible. I, like anyone else, have my own political leanings, but outside of specifically discussing policy affecting the tech space, I try to keep them out of the show. Even when politics does come up, I try to constrain my commentary to how a change or event will affect my listeners in the small and medium business space. That said, when I saw this story, its egregiousness, and its potential to serve as a vector into a topic that is often overlooked in the business space, I decided this was a conversation I wanted to have with my listeners, even if it is a bit more controversial than what I usually put in the show.
The UN has confirmed, in a recently released report, that back in March 2020, Turkey deployed a fully autonomous weapons system in Libya. When I say fully autonomous weapons system, I mean an unmanned drone that, armed with artificial intelligence, decided entirely without human input or confirmation whether a target should be fired upon. Whether a person should live or die. This is a literal autonomous killbot. This is thought to be the first time such a weapons system has seen combat in a fully autonomous mode; certainly it's the first recorded case. As someone who dabbles in AI in their spare time, and who interacts with software made by much smarter programmers than I at all stages of the software lifecycle, this terrifies me. Fully automated AI systems shouldn't be trusted with decisions far less impactful and permanent than ending a human life; I wouldn't trust an AI judge to handle a traffic ticket without human oversight, let alone the decision of whether someone should be killed. This isn't just my opinion. Ask anyone in the AI or software engineering spaces what they think of trusting software with highly critical applications, like voting, or indeed war, and the near-universal consensus is that these systems are not ready for prime time. Because of how they operate at a technical level, their decisions are nearly impossible to audit, even for the engineers who designed them. Like any software, they have bugs that no amount of testing will ever uncover. And that's the ideal case, where we assume the code is secure; that everyone involved in developing it is highly capable; that management didn't rush delivery or force decisions on the engineering team that hurt the quality of the code; that the physical constraints of sensors, cameras, processing power, or other hardware didn't force tradeoffs; and that all the hardware works perfectly, all the time.
Needless to say, I doubt a single product in the history of software development has been free of even one of these concerns, let alone all of them. It's no wonder the UN tried to ban systems like this back in 2018, though both Russia and the US exercised their Security Council veto power, leaving such systems fair game in war. Yikes.
So, let's take the time we have left in our news segment to talk a little about AI ethics: why they matter for businesses, what biases in AI can look like, and how these systems make decisions. We will follow up on our Friday episode with a larger conversation about AI and the value it can bring to businesses.
Modern AIs are built with software patterns designed to mimic the structure of neurons in the human brain. However, the scale at which they do so is much, MUCH smaller than the brain's. Where humans have around 100 billion neurons, these systems operate on anywhere from a dozen to a few thousand. This reduction in complexity is partly a hardware constraint, since simulating a full brain's worth of neurons is still out of reach without absurd hardware and absurd time, and partly a project management constraint: when we write the code that makes an AI work, a large part of that is defining how these neurons can interact with inputs, outputs, and each other. More neurons, more code, at least to an extent. In exchange for having less complexity than a human brain, we task these AIs with less complicated decisions. Rather than "What could possibly be causing this patient's symptoms given their complex medical history, diet, vitals, medication, and response to prior treatment attempts, and what is the best treatment option for this diagnosis?" we ask such a system "Does this X-ray show evidence of cancer?" Another important point is that these AIs are purpose-built. An X-ray-reviewing, cancer-finding AI will be useless at, say, identifying pedestrians in an image for a self-driving car system, or even at identifying cancers in X-rays of different parts of the body than what it was designed for.
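For anyone reading along with the transcript, here's a minimal sketch of the "artificial neuron" idea in Python. The inputs, weights, and bias are invented for illustration; a real network wires many of these neurons into layers, and it learns the weights from data rather than having them hand-picked.

```python
import math

# A single artificial "neuron": take a weighted sum of the inputs,
# then squash it through an activation function. Networks chain
# many of these together into layers.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation, output in (0, 1)

# Hypothetical example: two input features and hand-picked weights.
# A trained network would have learned these weights from data.
output = neuron([0.5, 0.8], weights=[1.2, -0.7], bias=0.1)
print(round(output, 3))
```

The output is just a number between 0 and 1, which downstream code interprets as something like "confidence that the answer is yes."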
The next piece is how these AIs learn what the correct answer to their question is, and that comes from human input. Generally speaking, we show an AI system a bunch of questions that have already been answered, and the software tunes itself to arrive at the same answers the training data provides. This is a HUGE source of malfunction in AIs today. For instance, train a bot to find the best candidate for a job opening based on the performance review scores of your current staff, cross-referenced with the resumes they used to get hired, and you have the potential for the biases of managers who were never involved in the resume-bot project to be reflected in the bot as well. If, without realizing it, the managers of Microsoft collectively rate female employees 5% lower on performance reviews than their male counterparts would receive for the same work, and Microsoft's performance review history is used as training data, the AI will find correlations in the resumes, for instance: "Applicants from this all-women's college should be valued less, because existing employees from this college perform worse on average, if only slightly." Boom, your nonhuman system has inherited human biases. This isn't hypothetical; we have seen it directly, again and again, in all kinds of bots. From facial recognition bots failing to correctly identify people of color because their training data was disproportionately white, to bots designed to help hiring managers wade through resumes not giving female applicants an even playing field, to speech recognition bots failing when confronted with accents and speech impediments.
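Here's a toy sketch of how that happens, with entirely invented data: a naive "model" that ranks applicants by their college's historical review average faithfully reproduces whatever bias was baked into those reviews, with no malicious intent anywhere in the code.

```python
# Invented historical data: reviews for College B run slightly lower,
# not because of ability, but because of biased human reviewers.
history = [
    {"college": "A", "review": 4.0},
    {"college": "A", "review": 4.2},
    {"college": "B", "review": 3.8},
    {"college": "B", "review": 4.0},
]

def avg_review(college):
    """Average historical review score for graduates of a college."""
    scores = [h["review"] for h in history if h["college"] == college]
    return sum(scores) / len(scores)

# A "model" that ranks applicants by their college's historical average
# inherits the reviewers' bias wholesale.
def rank(applicant_colleges):
    return sorted(applicant_colleges, key=avg_review, reverse=True)

print(rank(["B", "A"]))  # College A applicants always float to the top
```

Real training pipelines are far more sophisticated than a sorted average, but the mechanism is the same: the model optimizes toward whatever the historical labels say, bias included.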
These concerns matter for businesses because, while AI seems like a great way to cut down on expensive human labor, its misuse can land a business in hot water. After all, imagine the press and the lawsuits if Microsoft were to implement a hypothetical hiring AI, and it was found to artificially decrease the hiring rate of women. At the end of the day, decisions made by AI need to be regularly and continually checked and audited, by humans, to ensure that the decisions reached are the ones we want these tools to reach. So, let's take this all back to Turkey in Libya. The drones deployed have two modes: one fully autonomous, and one that functions closer to traditional weapon systems, asking for human confirmation before taking any shot. This is, as far as the UN and the press can tell, the first time any weapon system has been switched into a fully autonomous mode like this, and, as we've begun to explore, there's simply no way this system doesn't carry a very real potential for false positives in identifying combatants. The only way we find those issues for sure is when such a false positive happens, which means someone is dead who shouldn't be.
With that done, let's get on with the discussion we began previously on technical debt, and look at specific sources of debt that I see a lot in small and medium businesses.
In writing the script for this episode, I reviewed the past few hundred tickets PC Solutions has received during the onboarding stage of our clients' journeys, as well as a selection from the previous weeks' tickets, and marked down which ones were symptoms of technical debt, to gather some specific, real-world examples. I identified five core areas where a lot of businesses are lacking, in ways that produce additional labor or risk.
Email.
Email email email email. There's a reason we've covered email on four or five different shows at this point. It's an extremely powerful communication tool that, at this point, almost everyone hates. There are three mistakes I typically see in this area:
1. Not using a decent email host. There are really only three good options for email hosting; anything else causes problems. One is Microsoft 365. Two is Google Workspace, formerly Google Apps for Business, which is not as good, but workable. Three is hosting it yourself, if you're a large enterprise with well over 500 mailboxes. Anything else is a huge mistake that will end up costing you money and giving you countless headaches with spam, email delivery, trouble organizing email, space constraints, and so much else. On top of that, a solid portion of the other services out there will actually charge you more than the best-in-class products. Don't bother doing anything else; it's not worth it.
2. Poor email organization. See our episodes on organizing small business email. Without email being properly organized or managed, people run into missed emails, along with metric tons of emails they don't care about cluttering up their workspace.
3. Improperly configured DNS records. Do you find that a lot of your emails go unopened, or aren't received at all? Or that your recipients find emails from you in spam? This is likely why. A lot of the customers we bring on never had their email set up quite right, either to begin with, or to reflect a change in email hosts, and the end result is poor email delivery.
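For those reading the transcript who want to know what "set up quite right" actually looks like: reliable delivery hinges on three DNS records, called SPF, DKIM, and DMARC. Here is an illustrative sketch of their general shape for a domain hosted on Microsoft 365. The domain example.com, the selector names, and the policy values are placeholders; your actual record values come from your email host.

```
; Illustrative sketch only; example.com and the tenant name are placeholders.

; SPF: lists the servers allowed to send mail for your domain.
example.com.    TXT    "v=spf1 include:spf.protection.outlook.com -all"

; DKIM: points at the public key recipients use to verify your messages
; were really signed by your host (record provided by the email host).
selector1._domainkey.example.com.    CNAME    selector1-example-com._domainkey.example.onmicrosoft.com.

; DMARC: tells receiving servers what to do with mail that fails the
; SPF/DKIM checks, and where to send reports about it.
_dmarc.example.com.    TXT    "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

When these records are missing, stale, or still pointing at a previous email host, receiving servers have no way to tell your legitimate mail from a forgery, and a lot of it lands in spam.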
Documentation and policies.
The hidden magic that makes a system, any system, work well, consistently, is the documentation that backs it. Without proper documentation, businesses rely on what is effectively an oral history to determine how to operate. That is less than ideal, and it makes scaling the business, or surviving any employee turnover, unreliable and inconsistent. Different people do things in different ways, and interact with other people who are also doing different things in different ways, and your product or service suffers.
Business operational inefficiencies
A lot of businesses carry debt in the area of their business operations, in how tasks are supposed to be performed. Something has become common practice that is not ideal, and it creates more work on the back end without the affected people even realizing it. For instance, a piece of software that tracks inventory might spit out spreadsheets of its results, which employees then type into another system to reorder products, without the business realizing that those two pieces of software can be made to talk to each other, eliminating the task entirely. Recurring data entry usually falls into the category of work that can be eliminated, and it is needlessly costing your business money.
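As a sketch of what eliminating that kind of recurring data entry can look like, here's a short Python example. The CSV columns, product SKUs, and thresholds are all hypothetical; the point is simply that a few lines of glue code can stand in for a recurring manual task.

```python
import csv
import io

# Hypothetical inventory export, the kind a tracking tool might spit out.
# In practice you would read the real file, e.g. open("inventory.csv").
inventory_csv = """sku,on_hand,reorder_point,reorder_qty
WIDGET-1,3,10,50
GADGET-2,25,10,50
DOOHICKEY-3,0,5,20
"""

def items_to_reorder(csv_text):
    """Return (sku, quantity) for every item at or below its reorder point."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["sku"], int(row["reorder_qty"]))
            for row in reader
            if int(row["on_hand"]) <= int(row["reorder_point"])]

for sku, qty in items_to_reorder(inventory_csv):
    # In a real integration, this line would call the ordering
    # system's API instead of printing.
    print(f"reorder {qty} x {sku}")
```

Whether the glue is a script like this or a built-in integration between the two products, the effect is the same: the spreadsheet-to-keyboard step, and the typos that come with it, disappear.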
Single points of failure
Here we have a big one: single points of failure. This is anywhere your business relies on a single person or device as the only safeguard against an area of the business not being able to work properly. Problems occur when that single point of failure breaks down, if it's hardware, or leaves, or is simply out sick, if it's a person. How many tasks exist in your business today that only one person does, or can do? These need to be made redundant ASAP.
Filesharing
Filesharing. I cannot count the number of times I've gone on a first visit to someone's office and overheard the sentence "Can I borrow your computer real quick, I need a file on it," or "Can you email that file to me?" These moments are symptoms of files living on individual machines instead of in shared, centrally managed storage, and they create exactly the kind of wasted labor and single points of failure we've been talking about.
That's our show for today, thank you so much for listening. Next time, join us for our conversation on AI, ethics and business. In the meantime, check us out on the web at www.YourOperationsSolved.com, where you can join our newsletter, and separately opt to be notified of all our uploads. I will see you next time.