The Pink Datacenter – 1.5 – The Telco “startup”

5. The Telco “startup”

The technology

In 1997, I stepped into the world of telecommunications, where one of the key (and first) products I worked with was the AT&T Definity switch, later known as the Lucent Technologies Definity after AT&T spun off its communications equipment business. This switch was a powerhouse in the realm of telephony, designed to handle both analog and digital telephony and to interface with various adjuncts: servers equipped with specialized software for reporting, business intelligence (BI), messaging, and Interactive Voice Response (IVR).

The Definity switch was the heart of many corporate phone systems in its heyday. It was a Private Branch Exchange (PBX) system that allowed organizations to manage their phone lines, extensions, and calls internally. What made the Definity switch so versatile was its ability to handle both analog and digital telephony.

Analog telephony, characterized by the use of electrical signals to transmit voice and data, was still common in 1997. Many telephones and fax machines used analog connections, and the Definity switch had the capability to interface with these analog devices seamlessly. It could route analog calls, manage voice mailboxes, and even handle the transfer of faxes between users (only through its external messaging system).

On the other hand, digital telephony, which involved encoding voice and data into binary form for transmission, was emerging as the future of the field. The Definity switch excelled in this domain as well. It could manage Digital Signal Processor (DSP) resources for tasks like voice compression and encryption. The switch could interface with digital telephones and provide features like call waiting, call forwarding, and three-way calling, as well as six-party conference calls and, at some point, a whole deal of call-center logic and queuing directly within the switch. VoIP and IP telephony were still a long way off…

The true power of the Definity switch lay in its ability to interface with adjuncts, specialized servers that hosted software applications designed to enhance the telephony experience. These adjuncts were connected to the switch via serial connectivity, allowing for real-time data transfer and communication.

  • Reporting and Business Intelligence (BI) Servers: These adjuncts were essential for organizations seeking to analyze their call data. They collected and processed information related to call volumes, call durations, and other call statistics. The specialized software running on these servers could generate detailed reports, enabling businesses to make informed decisions regarding their telephony systems. With access to these reports, organizations could optimize their call center operations, improve customer service, and identify areas for cost reduction. The CMS (Call Management System) was my pride and joy: it ran on a Sun Solaris platform, and we provided both the hardware and the preinstalled software to our premium customers.
  • Messaging Servers: Messaging was a crucial aspect of telephony in the late ’90s. Organizations relied on messaging servers to handle voicemails and fax messages. The Definity switch interfaced with these servers, routing voicemails and faxes to the appropriate user’s mailbox. Users could access their messages through their telephones or, in some cases, through computer-based applications. Messaging servers streamlined communication within businesses, allowing for quick and efficient message retrieval. Here we were talking about a SCO Unix server running our proprietary software, named Audix, which handled all the real-time requests coming to and from the PBX. The server also hosted the users’ mailboxes, with their voice and fax messages, which could then be backed up to tape.
  • Interactive Voice Response (IVR) Servers: IVR servers were at the forefront of customer service and call center automation. The Definity switch integrated with these servers to provide automated responses and gather information from callers. The specialized software on these servers, named Conversant, enabled businesses to create interactive menus, handle customer inquiries, and route calls to the appropriate agents. This technology significantly improved call center efficiency and customer satisfaction and, of course, it too ran on SCO Unix.

The serial connections between the Definity switch and these adjuncts were crucial for data exchange. They allowed the switch to communicate with the servers, providing information about call routing, call statuses, and user preferences, along with the voice messages and other data being transmitted. This integration was the backbone of efficient telephony operations and was instrumental in creating a seamless and productive communication environment within organizations.
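
As a purely illustrative aside (and definitely not the actual Definity or CMS protocol), a modern sketch of an adjunct’s job might look like a small process reading newline-terminated call records from a serial port and handing them to a reporting pipeline. The port name, speed, and record layout below are assumptions; only the pyserial calls themselves are real.

```python
# Illustrative sketch only: a modern re-imagining of an adjunct polling
# call records from a PBX over a serial line using pyserial.
# The port name, baud rate, and record layout are assumptions, not the
# real Definity/CMS protocol.
import serial  # pip install pyserial


def read_call_records(port="/dev/ttyS0", baudrate=9600):
    """Yield pipe-delimited, newline-terminated call records from a serial port."""
    with serial.Serial(port, baudrate=baudrate, timeout=1.0) as link:
        while True:
            raw = link.readline()          # returns b"" on timeout
            if not raw:
                continue
            fields = raw.decode("ascii", errors="replace").strip().split("|")
            if len(fields) < 4:
                continue                   # skip malformed records
            caller, callee, duration_s, disposition = fields[:4]
            yield {
                "caller": caller,
                "callee": callee,
                "duration_s": int(duration_s),
                "disposition": disposition,
            }


if __name__ == "__main__":
    for record in read_call_records():
        print(record)                      # hand off to the reporting/BI side instead
```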

The AT&T, then Lucent Technologies, Definity switch, with its capability to handle analog and digital telephony and its ability to interface with specialized adjuncts via serial connections, played a vital role in shaping the telecommunications landscape of its time. It empowered companies to manage their phone systems effectively, providing the foundation for more advanced telephony solutions that have since evolved with the ever-changing technology landscape.

We started talking about VoIP and IP communications only around 2000-2001, when this new technology would disrupt the voice lords in more than one way… but back then, it was still almost sci-fi.

You start working in 1997 on an old at&t, now lucent technologies, Definity switch. describe how the switch works with analog and digital telephony and how it interfaces via serial connection to special adjuncts, servers that have a specialized software (either for reporting and bi, or for messaging, or for IVR).
the prompt

The Pink Datacenter – 1.1 – How this all started

Chapter 1: First steps, baby steps

1. How this all started

In the vast lecture hall housing 400 computer science engineering enthusiasts, there I stood—an 18-year-old embarking on a journey into the world of code and circuits. Alongside me (or rather, on the other side of the hall) was the only other intrepid woman venturing into this uncharted territory. The crowd, a delightful mix of funny and brilliant minds, surrounded us like eager explorers ready to forge friendships.

As the intricate dance of university life unfolded, I found myself drawn to a specific group. They extended an invitation to join their digital haven, “The Golem’s Tavern,” nestled within the expansive realm of Fidonet. Eager to unravel the mysteries of this BBS kingdom, I sought guidance from the sysop, spending more than an afternoon immersed in the arcane rituals required to access this digital oasis.

My initiation into the digital realm began with the acquisition of a Linux computer, or rather the repurposing of my DOS PC. Opting for a dual-boot setup with LILO, I aimed to maintain an air of innocence, ensuring my parents remained blissfully unaware of my rebellion against the omnipresent DOS. The Linux installation, delivered on 8 not-so-floppy 3.5″ disks, demanded the ritualistic act of compilation to breathe life into the operating system.

Next on the agenda were the modem drivers for my state-of-the-art Zyxel modem, a 1200 baud marvel of modern technology. GoldED, the preferred tool for editing messages, and FrontDoor, the gateway to the BBS, completed the ensemble. I proudly claimed the title of point 2:331:311.29 of Fidonet—a digital address that felt like a secret key to an alternate reality. To my kids today I say: these messages were asynchronous, but you could still connect every five minutes and make it near-real-time. Playing VGA Planets, you just had to upload and download in the proximity of the server run.

The BBS became my sanctuary, a space where the introverted corners of my mind could unravel freely. In the pixelated expanse of the digital tavern, I connected with individuals who would become my friends for life. Our conversations spanned the spectrum from code snippets to late-night musings, and I reveled in the camaraderie fostered by our shared digital realm. My aka was Sherazade, loosely inspired by the heroine of One Thousand and One Nights.

Then came the pivotal moment—an announcement of a meetup. In the absence of the modern “meetup” designation, our rendezvous was a straightforward plan for pizza. Little did we know it would evolve into a water-drenched spectacle, echoing that first unconventional gathering in S2.E8 of the great “Halt and Catch Fire” TV show. I have to admit that when I saw the episode many years later, my eyes were watery and my heart skipped a beat just remembering the feeling.

The day arrived, a collision of digital avatars stepping into the corporeal world. The awkwardness of the initial encounter mirrored the scenes from the TV show, with an added touch of extraordinary weirdness. These were people I intimately knew from the depths of our online conversations, and yet, the physical connection was a revelation.

Dialogues and silences danced through the air like packets in cyberspace, each sentence a testament to our shared digital history. Some connections sparked into real-life friendships, while others fizzled out in the unpredictability of face-to-face chemistry. But every moment was tinged with amazement, an affirmation of the extraordinary journey from bits and bytes to handshakes and shared pizzas.

As the jars of water rained down in the restaurant (we were subsequently banned from it), laughter echoed the sentiment that this was a meeting of kindred spirits—geeks, nerds, and digital denizens turned friends, bound by the tapestry of our shared online escapades, a bond proven by the simple fact of being there at all, having faced the hardships of a 1992 Linux setup. The meetup concluded not just with wet clothes but with the assurance that the friendships forged in the digital tavern were resilient enough to withstand the transition to the tangible world.

In the end, “The Golem’s Tavern” wasn’t just a BBS; it was a digital sanctum that transcended the confines of code and connected us in ways that defied the limitations of the screen. It was a celebration of the quirks, the bytes, and the friendships that bloomed in the virtual realm, leaving an indelible mark on the landscape of our university years in the early ’90s.

It’s 1992 – you are 18 and start university – you chose to major in computer science engineering. In a 400 people course there are 2 women, including yourself. The rest of the crowd is made of funny and intelligent nerds who pamper you and all want to get to know you. You are always approached by new classmates who want to know you as it’s only you, the redhead, and Simona the blonde, these weird creatures in a land of boys. At some point you get hooked on a specific group and they invite you to join their BBS. It is under Fidonet and is called “The Golem’s tavern”. You ask for some help to the sysop and spend an afternoon with him explaining all the steps to get there: first, you need a Linux computer, better if it’s double booted with Lilo, so your parents don’t know that you got rid of the DOS. The Linux install comes in 8 floppy disks (which are already the 2.5″ so technically they are not floppy, but still annoying) that you must COMPILE for the OS to work. then it’s the turn of the modem drivers to connect to the phone line. you have a 1200 baud – or bps – Zyxel modem that is one of the latest models. you use GoldED to edit messages and Frontdoor to connect to the BBS. You are point 2:331:311.29 of Fidonet and a world suddenly opens up to you where you find yourself free to express yourself without the constraints of your introvert mind. You are able to really connect to some people in the group in a weird and deep way, and they become your friends for life. At some point when they organize a meetup (it was not called meetup at the times, we just went for pizza that ended up with us throwing buckets of water to each other at the restaurant) it is basically like the scene in “Halt and Catch Fire” when the BBS people finally meet in person: weird and extraordinary, that feeling you are among your bunch. You know these people intimately and deeply from your online conversations and yet there is no physical connection until you meet them. In some cases, this sparks to life, in other it just doesn’t click, but in the end, it is amazing all the way. Write in nerdy, techie, funny tone, with lots of details on the software and gear, and using dialogues for the meetup.

The Pink Datacenter – my book

📚 Introducing my New Book: “The Pink Datacenter”, written this last November for the NaNoWriMo initiative with the help of my friend the AI chatbot.

🚀 Over the years, I’ve accumulated a treasure trove of stories, anecdotes, and insights from the world of technology, and I can’t wait to share them with you. From the early days of the internet to the cutting-edge innovations of today, my book chronicles the adventures, challenges, and triumphs of navigating the ever-evolving landscape of tech with the wide-open eyes of a girl who loves technology.

📖 Starting this month, I’ll be publishing excerpts from the book right here on my blog, one paragraph at a time. Each snippet will offer a glimpse into the fascinating world of tech, with humorous anecdotes, witty observations, and valuable lessons learned along the way. Whether you’re a seasoned tech enthusiast or simply curious about the inner workings of the industry, or in search of a #girlintech role model, I guarantee there will be something for everyone in “The Pink Datacenter.”

✨ Stay tuned for my first installment this week, and get ready to embark on a journey through the highs and lows of the tech world like never before. With “The Pink Datacenter,” I invite you to join me as I explore the past, present, and future of technology, one paragraph at a time.

The Carbon Monkey

How the principles of Chaos Engineering, and “carbon monkeys” that simulate real-life energy events, can help us achieve our sustainable software engineering goals.


According to the Principles of Chaos Engineering site, Chaos Engineering is the discipline of experimenting on a system in order to build confidence in that system’s capability to withstand turbulent conditions in production. I have followed this discipline over the years and find it fascinating, especially when applied to large-scale applications and systems. As the site explains:

“Even when all of the individual services in a distributed system are functioning properly, the interactions between those services can cause unpredictable outcomes. Unpredictable outcomes, compounded by rare but disruptive real-world events that affect production environments, make these distributed systems inherently chaotic.

We need to identify weaknesses before they manifest in system-wide, aberrant behaviors. Systemic weaknesses could take the form of improper fallback settings when a service is unavailable; retry storms from improperly tuned timeouts; outages when a downstream dependency receives too much traffic; cascading failures when a single point of failure crashes; etc. We must address the most significant weaknesses proactively, before they affect our customers in production.

We need a way to manage the chaos inherent in these systems, take advantage of increasing flexibility and velocity, and have confidence in our production deployments despite the complexity that they represent. An empirical, systems-based approach addresses the chaos in distributed systems at scale and builds confidence in the ability of those systems to withstand realistic conditions. We learn about the behavior of a distributed system by observing it during a controlled experiment. We call this Chaos Engineering.”

Build a Hypothesis around Steady State Behavior

Let’s start with the first step: the steady state is the condition our application should aspire to. If we translate this principle into a sustainable one, it becomes the most beautiful and efficient state of an application: one where no energy is wasted, and efficiency and performance are at their best.
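
A minimal sketch of what such a hypothesis could look like in code, assuming you can sample throughput, power draw, and latency from whatever telemetry you already have. The names and thresholds below are invented for illustration, not a standard.

```python
# Minimal sketch: expressing the "green steady state" as a testable hypothesis.
# The metric sources (requests/min, watts, p95 latency) are placeholders for
# whatever telemetry you actually have (APM metrics, smart PDU, RAPL, etc.).
from dataclasses import dataclass


@dataclass
class SteadyState:
    min_requests_per_joule: float   # efficiency floor we hypothesize
    max_p95_latency_ms: float       # performance ceiling we hypothesize


def within_steady_state(requests_per_minute: float,
                        power_watts: float,
                        p95_latency_ms: float,
                        target: SteadyState) -> bool:
    """True if the system currently satisfies the steady-state hypothesis."""
    joules_per_minute = power_watts * 60.0
    requests_per_joule = requests_per_minute / joules_per_minute
    return (requests_per_joule >= target.min_requests_per_joule
            and p95_latency_ms <= target.max_p95_latency_ms)


# Example hypothesis: at least 0.5 requests per joule at p95 latency <= 300 ms.
baseline = SteadyState(min_requests_per_joule=0.5, max_p95_latency_ms=300.0)
print(within_steady_state(requests_per_minute=6000, power_watts=150,
                          p95_latency_ms=220, target=baseline))   # -> True
```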


The most difficult part is how to measure and set this initial state. My colleagues have shared numerous ideas on the Sustainable Software Engineering blog that might help you jumpstart your measurement. However, I feel that at some point this will have to reach a standardized and widely accepted form, with a “carbon limit” beyond which an application is considered inefficient and not sustainable.

Vary Real-world Events

This is the principle that shows how close chaos engineering and sustainable software engineering really are. There is no steady and predictable flow of energy coming from a single renewable source. From the challenging big picture of harnessing solar, wind, or hydro energy down to the moment we plug a device into the outlet, we have only limited ways of knowing exactly how the energy powering that device is being produced at that moment in time. Doing so precisely requires considering seasonality, time of day, and peak hours, as well as the weather conditions that determine how much renewable supply is actually available. The variables around this concept are too many!

Imagine now that your application is running in a virtual datacenter, where you have even less information about its carbon impact. We still need to start somewhere, though, and set a baseline amount of carbon usage for the application. This will be useful for measuring its increases and decreases and driving efficiency.
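
For experimentation, one pragmatic workaround is to simulate a time-varying carbon-intensity signal rather than wait for perfect data. The sketch below is purely illustrative: the numbers and the shape of the curve are invented, and a real signal would come from a grid or carbon-intensity data provider.

```python
# Illustrative only: a synthetic carbon-intensity signal (gCO2eq/kWh) that
# mixes a daily solar-like cycle with random weather "events". The numbers
# are invented; real signals would come from a grid/carbon-intensity API.
import math
import random


def simulated_carbon_intensity(hour_of_day: float, rng: random.Random) -> float:
    base = 450.0                                               # fossil-heavy baseline
    solar = 180.0 * max(0.0, math.sin(math.pi * (hour_of_day - 6) / 12))
    weather_penalty = 120.0 if rng.random() < 0.15 else 0.0    # cloudy or calm spell
    return max(50.0, base - solar + weather_penalty)


rng = random.Random(42)
for hour in range(0, 24, 3):
    print(f"{hour:02d}:00 -> {simulated_carbon_intensity(hour, rng):.0f} gCO2eq/kWh")
```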

Back to chaos engineering. Simulating power outages is just a start. We can think of it as the starting point for a sustainable application:

  • What if the renewable power sources are suddenly unavailable, and I therefore have spikes in energy consumption that I could not foresee, even in the greenest application?
  • What if at some point my application has become a “carbon monster,” greedy for energy because a query has gone wrong and it is suddenly spending most of its energy just searching for that one item in your cart? Or because the network path has changed due to an outage on the usual route and latency spikes?

Trying to replicate real-life energy events directly against an application will make it more resilient to lower energy availability and, overall, more efficient.

Enter the “Carbon Monkey”

The concept is a “carbon monkey”: a process or system that triggers energy inefficiencies at random, tests how your application reacts, and measures the differential performance, which can be related to a differential carbon impact.
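
Here is a toy sketch of what such a monkey could look like: it occasionally injects an energy-wasting event (a CPU burn or an artificial latency spike) before running the workload, and records per-iteration durations so the differential can be compared later. Event types, probabilities, and durations are all invented examples, not a reference implementation.

```python
# A toy "carbon monkey": every iteration it may inject an energy-wasting
# event (a CPU burn or artificial latency) before the workload runs, and it
# records how long each iteration takes so differentials can be compared.
import random
import time


def burn_cpu(seconds: float) -> None:
    """Waste cycles to mimic a runaway query or busy loop."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        _ = sum(i * i for i in range(1000))


def add_latency(seconds: float) -> None:
    """Mimic a degraded network path."""
    time.sleep(seconds)


EVENTS = [("cpu_burn", burn_cpu), ("latency_spike", add_latency)]


def carbon_monkey(workload, iterations: int = 50, chaos_probability: float = 0.3):
    """Run the workload repeatedly, sometimes injecting an event first."""
    rng = random.Random()
    results = []
    for i in range(iterations):
        injected = None
        if rng.random() < chaos_probability:
            injected, action = rng.choice(EVENTS)
            action(rng.uniform(0.1, 0.5))
        start = time.monotonic()
        workload()
        results.append({"iteration": i, "event": injected,
                        "duration_s": time.monotonic() - start})
    return results


if __name__ == "__main__":
    def sample_workload():
        return sum(range(200_000))      # stand-in for a real request handler

    for row in carbon_monkey(sample_workload, iterations=10):
        print(row)
```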


We have given a lot of thought to the problem of measuring an application’s carbon efficiency. But this approach offers a change of perspective: instead of measuring how much energy an application consumes, we add energy events, observe how the application behaves, and then drive change to improve its reaction to the events that make it less green.

As a result, we won’t have an exact measurement of carbon impact, only a differential. With time, this differential can become an absolute number, once other systems allow us to retrieve more precise energy consumption metrics. In the meantime, let the carbon monkey help us reduce impact regardless of metric standardization!


Call for more “Carbon Monkeys”

I’d like to see developer communities creating one or more “carbon monkeys” that can introduce energy-impacting events into applications, to foster resiliency towards sustainability. 

The starting point is defining a set of incorrect assumptions about energy usage that can prevent our application from performing “green”. These would include assumptions about the highest energy cost, carbon use, or region; the shortest and longest queries; the shortest and longest network paths; and the highest compute and memory usage, among other things.

These assumptions should then be exercised by an automated process (our monkey) that makes sure the application’s patterns are resilient enough to overcome those issues without failing completely. At the end of the run, we could compute a carbon resiliency value that helps set a standard for evaluating an application’s carbon impact differential.
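
One possible (and entirely non-standard) way to compute such a value from the differential measurements, for example those produced by a run of the carbon monkey sketch above: compare behaviour with and without injected events, so that 1.0 means the application fully absorbs them.

```python
# One possible, non-standard "carbon resiliency" value: the closer the
# application's behaviour under injected events is to its baseline, the
# closer the score is to 1.0. Expects rows shaped like the carbon_monkey()
# output from the sketch above.
from statistics import mean


def carbon_resiliency(results: list[dict]) -> float:
    baseline = [r["duration_s"] for r in results if r["event"] is None]
    degraded = [r["duration_s"] for r in results if r["event"] is not None]
    if not baseline or not degraded:
        return 1.0                          # nothing injected, nothing learned
    ratio = mean(baseline) / mean(degraded)
    return max(0.0, min(1.0, ratio))        # 1.0 = events fully absorbed
```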

Originally published in the Microsoft Developer Blogs

Sustainable cloud native software with serverless architectures

Living in Milan, I have had to deal with extraordinary air pollution levels since December 2019; on some days, controversial charts compared Milan with cities in China and India that are far more densely populated and, at least by common perception, far more polluted.

Then came COVID-19, and our concerns obviously moved elsewhere. Like everyone, at least in Lombardy, I was in lockdown from 21 February to 4 May. In the midst of a thousand worries, a little voice in the back of my head kept pointing out that, suddenly, the air was no longer polluted and CO2 levels had dropped significantly, which in short meant that an important change, with impactful results, was indeed possible.

Fast forward to now … do we want to go back to the impossibly polluted air of January 2020? If the answer is no, then something needs to change.

First, let’s see why a change is due and important. The whole scientific community agrees that the world has a pollution problem. Carbon dioxide in our atmosphere has created a layer of gas that traps heat and changes the earth’s climate. Earth’s temperature has risen by more than one degree centigrade since the industrial revolution of the 1700s.

If we don’t stop this global warming process, scientists tell us that the results will be catastrophic:

  • Further increase in temperature
  • Extreme weather conditions, drought, fires (remember the Australian situation at the beginning of the year?)
  • Rising sea levels could make areas where more than two hundred million people live uninhabitable
  • Drought will inevitably lead to food shortages, which could impact over 1 billion people.

To summarize, we must drastically reduce CO2 emissions and prevent the temperature from rising above 1.5°C.

Problem. Every year the world produces and releases more than 50 billion tons of greenhouse gases (measured in CO2-equivalent) into the atmosphere.

CO2 emissions are classified into three categories:

Scope 1 – direct emissions created by our activities.

Scope 2 – indirect emissions that come from the production of electricity or heat, such as traditional energy sources that power and heat our homes or company offices.

Scope 3 – indirect emissions that come from all other daily activities. For a company, these sources are numerous and must include the entire supply chain, the materials used, employee travel, and the entire production cycle.

When we speak of “carbon efficiency,” we acknowledge that greenhouse gases are not made up only of carbon dioxide, and that they do not all have the same impact on the environment. For example, 1 ton of methane has the same heating effect as roughly 80 tons of carbon dioxide, so the convention is to normalize everything to a CO2-equivalent measure.

International climate agreements commit to reducing “carbon” pollution and stabilizing the temperature increase at 1.5°C by 2100.

Second problem. The increase in temperature does not depend on the rate at which we emit carbon, but on the total quantity present in the atmosphere. To stop the rise in temperature, we must therefore avoid adding to the existing stock, or, as they say, reach the net-zero target. Of course, to keep living on earth, this means that for every gram of carbon emitted, we must remove just as much.

Solution to both problems: emissions must be reduced by 45% by 2030 and reach net zero by 2050.

Let’s now talk about what happens with datacenters, and in this specific case, public cloud datacenters.

  • The demand for compute power is growing faster than ever.
  • Some estimates indicate that data center energy consumption will account for no less than a fifth of global electricity by 2025.
  • A server/VM operates on average at 20-25% of its processing capacity, while still drawing power for the capacity that sits unused.
  • On the other hand, when applications run directly on physical hardware, the servers must be kept running and consuming resources regardless of whether an application is running or not.
  • Containers have a higher density and can bring a server/VM up to 60% utilization of its compute capacity.
  • Ultimately, it is estimated that 75-80% of the world’s server capacity is just sitting idle.

While browsing for solutions, I found very little documentation and few formal statements about sustainable software engineering. Talking to my Microsoft colleague Asim Hussain, I found out that there is a “green software” movement, which started with the principles.green website, where a community of developers and advocates is trying to create guidelines for writing environmentally sustainable code, so that the applications we work with every day are not only efficient and fast, but also economical and environmentally friendly. The eight principles are:

  1. Carbon. The first step is to make the environmental efficiency of an application a general target. It seems trivial, but to date there is not much documentation about it in computer science textbooks or websites.
  2. Electricity. Most electricity is produced from fossil fuels and is responsible for 49% of the CO2 emitted into the atmosphere. All software consumes electricity to run, from the app on a smartphone to the machine learning models running in cloud data centers. Developers generally don’t have to worry about these things: electricity consumption is usually treated as “someone else’s problem”. But a sustainable application must take responsibility for the electricity it consumes and be designed to consume as little as possible.
  3. Carbon intensity. Carbon intensity measures how many CO2-equivalent emissions are produced per kilowatt-hour of electricity consumed. Electricity is produced from a variety of sources, each with different emissions, in different places and at different times of the day; and above all, when it is produced in excess, we have few ways of storing it. We have clean sources such as wind, solar, and hydroelectric, while other sources, such as power plants, have different degrees of emissions depending on the material used to produce energy. If we could connect a computer directly to a wind farm, it would have zero carbon intensity. Instead we connect it to the power outlet, which receives energy from different sources, so we must accept that our carbon intensity is always a number greater than zero.
  4. Embedded or embodied carbon is the amount of pollution emitted during the creation and disposal of a device. So writing efficient applications that can still run on older hardware also has an impact on emissions, because it extends the life of devices whose embodied carbon has already been spent.
  5. Energy Proportionality. The maximum rate of server utilization must always be the primary objective. In general, in the public cloud this also equates to cost optimization. The most efficient approach is to run an application on as few servers as possible and with the highest utilization rate.
  6. Networking. Reducing the amount of data and the distance it must travel across the network also has an impact on the environment. Optimizing the route of network packets is as important as reducing the use of servers. Networking emissions depend on many variables: the distance crossed, the number of hops between network devices, the efficiency of the devices, and the carbon intensity of the region where and when the data is transmitted.
  7. Demand shifting and demand shaping. Instead of shaping the supply around demand, a green application shapes its demand around the energy supply. Demand shifting involves moving some workloads to regions, and to times of day, with lower carbon intensity. Demand shaping, on the other hand, involves separating workloads so that they are independently scalable, and prioritizing features based on energy consumption. When the energy supply is low, and the carbon intensity is therefore above a given threshold, the application reduces its features to a minimum, keeping only the essential ones. Users can also be involved in the choice by being offered a “green” option with a minimal set of features. A minimal scheduling sketch follows right after this list.
  8. Monitoring and optimization. Energy efficiency must be measured across all parts of the application to understand where to optimize. Does it make sense to spend two weeks reducing network traffic by a few megabytes when a single database query has ten times the impact on emissions?
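
To make demand shifting a bit more concrete, here is a minimal sketch of the kind of decision it implies: picking the region and time window with the lowest forecast carbon intensity before scheduling a deferrable job. The region names and forecast numbers are invented for illustration; in practice the figures would come from a grid or cloud carbon-intensity data source.

```python
# A minimal demand-shifting sketch: pick the region/hour with the lowest
# forecast carbon intensity before scheduling a deferrable batch job.
# The forecast values below are invented for illustration.
FORECAST_G_CO2_PER_KWH = {
    # (region, hour_utc): forecast carbon intensity
    ("west-europe", 2): 310, ("west-europe", 13): 220,
    ("north-europe", 2): 180, ("north-europe", 13): 160,
    ("east-us", 2): 420, ("east-us", 13): 390,
}


def greenest_slot(forecast: dict[tuple[str, int], int]) -> tuple[str, int]:
    """Return the (region, hour) pair with the lowest forecast carbon intensity."""
    return min(forecast, key=forecast.get)


region, hour = greenest_slot(FORECAST_G_CO2_PER_KWH)
print(f"Schedule the deferrable workload in {region} around {hour:02d}:00 UTC")
# -> north-europe at 13:00 in this made-up forecast
```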

The principles are generic for any type of application and architecture, but what about serverless?

Serverless applications are natively suited to emissions optimization. Since the same application consumes differently at different times depending on where it runs, demand shifting is a technique that can easily be applied to serverless architectures. Of course, with serverless we have no control over the underlying infrastructure; we must trust that cloud providers want to run their servers at 100% capacity. 😊

Cost optimization is generally also an indication of sustainability, and with serverless we can have a direct impact on execution times, on network data transport, and in general on building applications that are efficient not only in terms of time and cost, but also of emissions.

The use of serverless brings measurable benefits:

  • Serverless allows for a more efficient use of the underlying servers, because they are managed in shared mode by the cloud providers and built for efficient use of energy, with optimized data center temperature and power.
  • In general, cloud datacenters have strict rules and often ambitious emission targets (for instance, Microsoft recently declared its intention to become carbon negative by 2030). Making the best use of the most optimized resources of a public cloud provider implicitly means optimizing the emissions of your application.
  • Since serverless only uses on-demand resources, the server density is the highest possible.
  • Serverless workloads are ready for demand-shifting / shaping executions.
  • From a purely theoretical point of view, writing optimized and efficient code is always a good rule of thumb, regardless of the purpose for which you do it 😊

Developers can immediately have an IMPACT on application sustainability:

  • By making a program more accessible to older computers.
  • By writing code that exchanges less data, has a better user experience and is more environmentally friendly.
  • If two or more microservices are highly coupled, by considering co-locating them to reduce network congestion and latency.
  • By considering running resource-intensive microservices in a region with less carbon intensity.
  • By optimizing the database and how data is stored, thus reducing the energy needed to run the database and the idle time spent waiting for queries to complete.
  • In many cases, web applications are designed by default with very low latency expectations: a response to a request should occur immediately or as soon as possible. However, this may limit the sustainability options. By evaluating how the application is actually used and whether latency requirements can be relaxed in some areas, further emission reductions become possible, as the sketch after this list shows.
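
As a small illustration of the last two points, here is a hedged sketch of demand shaping inside a request handler: when the current carbon intensity is above a threshold, only essential features are served immediately and the rest is deferred until greener hours. The feature names, threshold, and queue are placeholders, not a specific framework or API.

```python
# A demand-shaping sketch for a request handler: above a carbon-intensity
# threshold, serve only the essential features in "eco mode" and defer the
# rest, relaxing the "everything immediately" latency expectation.
ESSENTIAL_FEATURES = {"checkout", "cart", "search"}
CARBON_THRESHOLD_G_PER_KWH = 300


def handle_request(feature: str, payload: dict,
                   current_carbon_intensity: float,
                   defer_queue: list) -> str:
    if current_carbon_intensity <= CARBON_THRESHOLD_G_PER_KWH:
        return f"served {feature} with the full experience"
    if feature in ESSENTIAL_FEATURES:
        return f"served {feature} in eco mode (reduced payload)"
    defer_queue.append((feature, payload))       # process later, when greener
    return f"deferred {feature} until carbon intensity drops"


queue: list = []
print(handle_request("recommendations", {}, current_carbon_intensity=410,
                     defer_queue=queue))
print(handle_request("checkout", {}, current_carbon_intensity=410,
                     defer_queue=queue))
```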

In conclusion, I am convinced that serverless architectures, when properly used, are the future not only because they are elegant, practical, and inexpensive, but also because they are the developer tools that today have the least impact on emissions. With the help of the community, we can create specific guidelines for serverless and maybe even a “carbon meter” for our serverless applications, which in the future could also become “low-carbon certified”.

COVID-19 was an inspiring moment in terms of what we managed to do at a global level: every country stopped, along with flights, traffic, and non-essential production. We know that something can be done and that this is the right time to act: if we are rebuilding everything from scratch, it is worth rebuilding in the right direction.

AI and CX?

From Mobile World Congress 2016 to the recent F8 ten-year roadmap speech, AI is definitely one of the hottest technology trends. And specifically, AI in the customer experience, the front line of any customer’s expectations of a company, has been buzzing for a while as an innovation topic.

News from the various AI experiments is not very reassuring: Tay’s meltdown proved once again what my university professor used to say about computer science, “garbage in, garbage out”.

So on one side we would love to have computers help us with our CX, but on the other it looks really risky, as any AI exposed to the public can be manipulated into giving bad, racist, or inappropriate responses to apparently innocent questions, sometimes just for the sake of it, other times because of deliberate sabotage schemed to bring it down.

Within Customer Experience, the relationship with automation has always been controversial. Would a customer like to be served by a robot and to what extent? Why would a company want to invest significant amounts of time and money to expose its front-line and most visible asset to malpractice and gruesome attacks from trolls and hackers?

The problem with any AI is, obviously, the learning. So Microsoft’s mistake was probably to trust the public network to be truthful and honest when teaching conversational skills to its bot.

Having worked in the customer experience realm for many years, I would never trust a bot to learn from public behaviour on social media: imagine sending your newly hired agent to learn conversational skills and empathy… in the street?

But, on the other hand, I know that these guys (the CX teams) are literally sitting on a pile of interaction recordings that are rarely used, except for some sparse quality management or compliance requirements. So why not use this big data to teach an AI, in a controlled, business-like yet still real-life environment, how a conversation about your own brand or product should evolve? The idea might not be new, but I haven’t seen anyone even testing it yet. Probably the biggest deterrent is that AI projects are still in an experimental phase, are very expensive, and bring little certainty of results.

But think about this: what if you could have your new hire listen to thousands of hours of work conversations to learn how to address issues, how to talk to customers, how to escalate properly, how to behave in the interaction realm, all in the business language of your own brand and company? This would be impossible for any human being, but for a bot… well, no big deal.
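
To make the idea a bit more tangible, here is a deliberately tiny sketch of “learning from your own recordings”: a toy TF-IDF retriever (using scikit-learn) that, given a new customer message, surfaces how an agent answered the most similar question in the transcript history. The transcripts are made up, and a real system would need far more than keyword similarity; this only shows that the raw material is already a set of conversational pairs.

```python
# Toy sketch: retrieve how your best agents answered similar questions,
# using a handful of made-up transcript pairs and TF-IDF similarity.
# Not a trained conversational model, just an illustration of reusing
# your own interaction history in a controlled way.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

transcripts = [
    ("my invoice is wrong, I was charged twice",
     "I'm sorry about that, I'll refund the duplicate charge right away."),
    ("how do I reset my password",
     "I can send you a secure reset link to your registered email."),
    ("the delivery is late and I need it tomorrow",
     "Let me check the shipment and upgrade it to express at no extra cost."),
]

customer_lines = [customer for customer, _ in transcripts]
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(customer_lines)


def suggest_reply(new_message: str) -> str:
    """Return the historical agent reply whose customer question is most similar."""
    scores = cosine_similarity(vectorizer.transform([new_message]), matrix)[0]
    return transcripts[int(scores.argmax())][1]


print(suggest_reply("I think I was charged twice on my invoice"))
```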

And the result: a perfectly trained agent ready to respond to your most difficult inquiries like your best-skilled agent. Also, because every contact centre is different from the others, their recordings will result in different and more accurate learning and behaviour from the same AI. Isn’t AI in such a case a dream come true?

As consumers, we probably would not care that the responses come from a bot, especially on digital channels where there is no voice and spotting a bot might be really tricky. In the end, what matters most is the CX perception, not the reality. 🙂