Whatever your preference, in today’s image-conscious social media world people generally want to look the best they can, and that’s why SenseTime, the world’s most highly-valued artificial intelligence (AI) start-up, is offering a filter for smartphone cameras and live-streaming apps that can automatically touch you up.
Four days after admitting that it continues to track users even after the Location History tracking has been disabled, Google has updated its website to more accurately reflect the nature of its location policy.
Apple will launch a refreshed entry-level MacBook next month, according to a report, with the updated model expected to be revealed during the company’s September event alongside new iPhones and other product announcements.
According to a Google executive, the company is backing out of the Project Maven contract that caused an uproar among its employees.
A huge backlash ensued when news broke that Google had partnered with the US military to provide AI expertise. Now, according to reports, the tech giant will not renew its contract with the Pentagon next year.
Gizmodo’s sources say that Google Cloud CEO Diane Greene announced the decision not to renew the contract at a meeting with employees Friday morning. Greene also explained that the backlash against the firm’s involvement in the project had been terrible for the company. The decision also comes before Google unveils its planned new ethical principles for its use of AI next week.
The project, initially dubbed Project Maven, grew out of the Pentagon’s “Algorithmic Warfare Cross-Functional Team”. Its focus was on working with the US military to improve image analysis of sensitive footage.
However, employees of the company called it out for taking part in warfare, claiming that Google was not standing by its motto of “Don’t be evil”. Thousands of employees protested, and some even handed in their resignations.
Google tried to get out of hot water by stating that Maven was only a “minor project”. However, it was later revealed to be quite the opposite: Google’s “golden opportunity” and a stepping stone to more lucrative military contracts, including a $10 billion cloud computing contract that Google is reportedly competing for.
Reports also claim that the project would have helped the company fast-track its security clearance.
Greene said that Google is at the forefront of the conversation about the ethical use of artificial intelligence. “It is incumbent on us to show leadership,” Greene said, according to a source.
Emergence Capital has announced a new fund aimed at companies that use machine learning to bolster productivity. The firm has $435 million ready to invest in various companies.
Emergence Capital raises its fund for AI projects
Emergence Capital’s fund will focus on companies that provide coaching powered by data and conversational AI, with the aim of helping people perform their jobs better. Emergence has made similar investments in the past, including call-centre analysis company Chorus.ai, Myaa, and Textio.
These companies, according to Emergence Capital, are now using conversational AI to fast-track recruitment messaging for the companies that are hiring.
“Any domain that you and I now spend our time in every day will in the future have a coaching network company that owns that domain. That’s where the world is headed, in our opinion,” cofounder and general partner Gordon Ritter told VentureBeat in an interview.
Online reports say the $435 million fund is Emergence Capital’s fifth. Back in 2015, the company raised a $335 million fund intended for investments in enterprise startups focusing on mobile tech and cloud infrastructure.
The company has also invested in Salesforce, Box, Zoom, and Service Box. Founded in 2003, Emergence Capital is headquartered in San Francisco.
Software giant Microsoft announced today that it has acquired Semantic Machines. The acquisition of the AI start-up is meant to bolster the company’s AI offerings, including products like Cortana, the Azure Bot Service, and Microsoft Cognitive Services.
Microsoft buys Semantic Machines
According to Microsoft, Semantic Machines has extensive experience in speech synthesis, deep learning, and natural language processing, and takes a creative approach to building conversational AI.
They claim that their products aid machines “to communicate, collaborate, understand our goals, and accomplish tasks.” Now that Microsoft has acquired Semantic Machines, you can expect the start-up to strengthen Microsoft’s conversational computing initiatives.
Microsoft is competing in this space against offerings from other big tech companies, such as Amazon’s Alexa, Apple’s Siri, Google’s Assistant, and Samsung’s Bixby. The AI start-up has also assembled a group of experts in the conversational AI arena.
This cadre of experts includes Larry Gillick, the former chief scientist for Siri at Apple, as well as UC Berkeley professor Dan Klein and Stanford University professor Percy Liang.
“With the acquisition of Semantic Machines, we will establish a conversational AI center of excellence in Berkeley to push forward the boundaries of what is possible in language interfaces,” Microsoft AI and research CTO David Ku said in a blog post.
In case you didn’t know, Semantic Machines was founded in August 2014. In the same year, investors poured $8.5 million into the company, followed by another $12.3 million in December 2015.
At this year’s annual Google I/O developer conference, CEO Sundar Pichai unveiled a new technology called ‘Duplex’. It enables the company’s Google Assistant to interact with humans by making phone calls in real time.
What is Google Duplex?
With Duplex, Google Assistant can book a hair appointment or reserve a table for you at your favorite restaurant, among other things.
Pichai said that Duplex is one of the many tools with which the search giant could make it easier than ever for you to interact with your smart devices. Duplex is another addition that will be released alongside the latest Android mobile operating system, Android P.
On stage at the I/O conference, held in Mountain View, California, and running from Tuesday through Thursday, Pichai demonstrated the Duplex technology. In a demo, Google Assistant dialed up a local hair salon to schedule an appointment.
Pichai pointed out that the demo was a real call using Google Assistant. “The amazing thing is that Assistant can actually understand the nuances of conversation,” he said. “We’ve been working on this technology for many years. It’s called Google Duplex.”
The Google chief executive then said that Duplex is still under development. The search giant plans to conduct early testing of Duplex inside Assistant this summer “to help users make restaurant reservations, schedule hair salon appointments, and get holiday hours over the phone.”
What do you need to know about Google Duplex? Its purpose is to make calls on our behalf, conducting conversations in a natural-sounding, flowing manner and performing real-world tasks.
About Google Duplex
It is supposed to sound natural, like a human. Google Duplex has even been mentioned in relation to the Turing Test, developed by Alan Turing in the 1950s, which is used to determine whether an artificial intelligence’s behavior is indistinguishable from that of a human.
It sounds eerie to talk to an AI and mistake it for a human, but that is Google’s objective: to “fool” humans into thinking they are conversing with a person on the other end of the line. This is made possible by advances in understanding, interacting, timing, and speaking, which ensure that recipients do not have to adapt to talking to a machine.
It is this ability to fool the person at the other end of the line that has prompted philosophical and ethical concerns. Google has responded by stating that Google Duplex will appropriately identify itself during conversations.
Speaking And Listening Like Us
Google has used speech disfluencies to create breaks during the conversation; these disfluencies make the speech produced by Google Duplex sound more natural. Understanding human responses is even more challenging, as we use complex sentences that are sometimes contradictory.
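As a rough illustration of the idea (and emphatically not Google’s actual implementation), a filler-insertion pass over response text might look like this; the function name and the `DISFLUENCIES` list are hypothetical:

```python
import random

# Hypothetical sketch: sprinkle filler words ("disfluencies") into a
# response before speech synthesis so the spoken output sounds less robotic.
DISFLUENCIES = ["um,", "uh,", "mmm,"]

def add_disfluencies(text, rate=0.2, rng=random):
    """Insert a filler word before some words, at the given rate."""
    out = []
    for word in text.split():
        if rng.random() < rate:
            out.append(rng.choice(DISFLUENCIES))
        out.append(word)
    return " ".join(out)

print(add_disfluencies("I would like to book a table for four"))
```

A real system would place fillers based on prosody and conversational context rather than at random, but the effect on perceived naturalness is the same in spirit.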
Google Duplex As Part Of Our Lives
Google Duplex has to get “sounding like a human” right for it to become part of our daily lives, for us and our businesses. It is also self-monitoring: when it encounters a task it cannot complete autonomously, it signals a human operator, who completes the task.
Google is a master conjurer of what’s innovative, and at the recent developers’ conference it showed its mettle with Google Duplex, a new technology that can conduct “natural” conversations over the phone through Google Assistant. It is designed to handle mundane tasks like setting up appointments and inquiring about prices – something it did so well that it caused an uncomfortable stir for sounding so human. According to The Verge, the assistant will now have a built-in disclosure identifying itself as AI before engaging in a conversation with a human. Using WaveNet, an audio-generating technology from DeepMind, Duplex also uses advances in language processing to understand and generate natural speech. In contrast to what we have gotten accustomed to, the conversation is not stilted and does not need adjusting to.
But before we jump the gun on Google and declare that Duplex will grab all front-desk jobs in the future, note that its expertise is limited to narrow real-world tasks on which it needs to be deeply trained. For now, Duplex can carry on limited talks convincingly but is not suited to lengthy conversations. Duplex, a fully automated system, can initiate and receive calls in a variety of voices. Incredible as it sounds, the voice you hear is computer-generated, even if the accent, context, syntax, and pauses are humanlike. The audio files below, in which appointments with a hair salon and a restaurant are made, are from the Google blog. To hear is to believe:
“Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone” — Yaniv Leviathan, Principal Engineer & Yossi Matias, Vice President, Engineering, Google
Longer conversations with someone who is not too familiar with the booking system in the salon or the menu in the restaurant are challenging, if feasible at all. Natural-sounding syntax, intonation, and meaningful pauses are extremely difficult when the level of familiarity is low. These are deemed complex conversations, and while Duplex may sound “human-like”, its contextual responses and nuances are not yet up to par.
According to the Google blog, the company has yet to fully master interruptions, elaborations, syncs, and pauses, but it is relying on advances in Google’s automatic speech recognition (ASR) technology, recurrent neural networks (RNNs), and TensorFlow Extended (TFX) to improve “understanding, interacting, timing, and speaking”. A meaningful conversation is the result of a sequence of processes:
The ASR processes incoming sound.
The text that is produced is run against the context and other inputs.
The response text is created.
The TTS (text to speech) system reads the response aloud.
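The four stages above can be sketched as a toy pipeline. Every function here is a hypothetical stand-in (real systems use ASR models, RNNs, and neural TTS), but it shows how one conversational turn flows from incoming sound to spoken reply:

```python
# Toy sketch of the four-stage pipeline described above. All functions are
# hypothetical stubs, not Google's APIs.

def asr(audio):
    """Stage 1: ASR processes incoming sound into text (stubbed)."""
    return audio["transcript"]  # pretend we decoded the waveform

def understand(text, context):
    """Stage 2: run the text against the conversation context."""
    if "what time" in text.lower():
        return {"intent": "ask_time", "slot": context.get("requested_time")}
    return {"intent": "unknown", "slot": None}

def generate_response(parsed):
    """Stage 3: create the response text."""
    if parsed["intent"] == "ask_time":
        return f"How about {parsed['slot']}?"
    return "Sorry, could you repeat that?"

def tts(text):
    """Stage 4: text-to-speech reads the response aloud (stubbed)."""
    return f"<spoken> {text}"

# One turn of the conversation:
context = {"requested_time": "7 pm"}
audio_in = {"transcript": "What time would you like the reservation?"}
reply = tts(generate_response(understand(asr(audio_in), context)))
print(reply)  # <spoken> How about 7 pm?
```

The key design point is that each stage’s output is the next stage’s input, which is why Duplex needs every stage (recognition, understanding, generation, synthesis) to work well for the whole call to sound natural.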
According to Google, this will greatly help businesses, because information and appropriate replies are available 24/7. It also eliminates “downtime”, which can be considerable and expensive when training and nesting front-liners. From the user end, you can book, search, and get information asynchronously, effortlessly, and in the background. How soon can Google’s deep learning and AI get this mainstreamed and threaten customer service jobs globally? Hopefully, it will make life easier without becoming smarter than humans.
Your Mobile Can Play AI-Powered Emoji Scavenger Hunt
Google has developed a new AI-powered emoji scavenger hunt that you can play on your mobile phone. You can try it by opening emojiscavengerhunt.withgoogle.com and seeing how far you can get.
This may be an indication of Google’s future plans to use artificial intelligence on your mobile phones.
Time and again, Google has released fun little games for us. This time it is back with another entertaining game based on artificial intelligence and emoji.
The Emoji Scavenger Hunt
To play, you use your mobile phone’s camera to find objects that match a given emoji within the time limit. Every time you find one, your time increases.
Its object recognition isn’t perfect, though: it fails to recognize some items and misidentifies others. For example, it recognized a dish-towel stand as a scarf.
But despite the game’s little imperfections, it is fun in its simple way, and playing it gives us first-hand experience of how artificial intelligence works. It will probably also help us understand and appreciate AI better, and accept its future role in our lives.
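For illustration, the game loop described above (match objects against the current emoji, gain time per find) might be sketched like this; the `classify` stub stands in for the real on-device image classifier, and all names here are hypothetical:

```python
# Toy sketch of a scavenger-hunt game loop. In the real game, frames come
# from the phone camera and labels from an image-classification model.

TARGETS = ["shoe", "banana", "cup"]  # objects the emoji ask for, in order
BONUS_SECONDS = 10                   # time awarded per successful find

def classify(frame):
    """Stand-in for the on-device image classifier."""
    return frame["label"]

def play(frames, initial_seconds=20):
    remaining = initial_seconds
    targets = list(TARGETS)
    found = []
    for frame in frames:
        if remaining <= 0 or not targets:
            break
        remaining -= frame["elapsed"]       # time spent hunting this frame
        if classify(frame) == targets[0]:
            found.append(targets.pop(0))    # matched the current emoji
            remaining += BONUS_SECONDS      # each find adds time
    return found, remaining

found, left = play([
    {"label": "shoe", "elapsed": 5},
    {"label": "plant", "elapsed": 5},
    {"label": "banana", "elapsed": 5},
])
print(found, left)  # ['shoe', 'banana'] 25
```

The interesting design decision is the time bonus: imperfect recognition is forgivable because a single good match extends the game, which keeps the experience fun rather than frustrating.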
AI features like object recognition are finding their way into our everyday lives, and developers keep building new devices around artificial intelligence.
Other AI Developments From Google
Google’s I/O developer conference takes place this coming week, and the company is expected to have some artificial intelligence news to share. An update to Google Lens is rumored, and more will probably be heard about the company’s cloud offerings. Google has already given users a glimpse of how far it has developed its natural language processing, which deals with machine reading comprehension.
Google’s Semantic Experiences
The Google Research division has rolled out Semantic Experiences: websites with interesting activities that demonstrate AI’s ability to understand how we speak. There are two experiences to enjoy, and a third that helps developers create their own.
The first experience is called “Talk to Books”, in which users can explore a new way to interact with books. The second is “Semantris”, where people can play word-association games powered by semantic search. Google trained its AI by feeding it a “billion conversation-like pairs of sentences” so it could learn to identify what a good response looks like.
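To illustrate the idea of picking a good response by semantic similarity, here is a deliberately simple sketch using bag-of-words vectors and cosine similarity. Google’s actual systems use learned sentence embeddings trained on those conversation pairs, not this toy representation:

```python
import math
from collections import Counter

# Toy sketch: rank candidate responses by similarity to the query.
# embed() is a stand-in for a real learned sentence-embedding model.

def embed(sentence):
    """Bag-of-words 'embedding': word -> count."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_response(query, candidates):
    """Return the candidate most similar to the query."""
    q = embed(query)
    return max(candidates, key=lambda c: cosine(q, embed(c)))

candidates = [
    "the cat sat on the mat",
    "stock prices fell sharply today",
    "my cat loves sitting on a warm mat",
]
print(best_response("where does the cat sit", candidates))
```

A learned embedding would also match “sit” against “sat” and “sitting”, which word counts cannot; that gap is exactly what training on a billion sentence pairs addresses.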
What The Future May Have In Store
Google keeps finding and developing ways to satisfy its users, and artificial intelligence is among them. The company is bringing AI to your mobile phone now, starting with a simple, fun game like the emoji scavenger hunt. And its developers have one thing in mind: to give us the best.
The sooner we come to terms with the fact that we can’t compete with machines on high-volume, high-frequency, repetitive tasks, the sooner we can prepare for, wage, and win the guerrilla war over disruptive technology that threatens jobs – yours and mine included. Unlike in the industrial age, when machines took over from blue-collar workers performing automated tasks, today’s technology, far more advanced than anyone could have imagined, has exploded to the extent that even white-collar jobs are encroached upon.
Already, many niches are feeling the pinch, even as the likes of Elon Musk, Bill Gates, and Warren Buffett keep supreme faith that humans will prevail. Even though the fictional “Skynet” is still far away, it is an undeniable fact that AI and robots have progressed so much over the last ten years that they are changing the picture of work in the near future. We think nothing of talking to Siri or Alexa, giving chatbots privileged information, or allowing self-driving cars to bring us safely to our destination. Currently, IBM’s Watson Explorer is equipped with “technology that can think like a human…it can analyze and interpret all data including unstructured text, images, audio, and video.”
The jobs of tomorrow, in a future where as much as 45% of current jobs could disappear within ten years, should be seen in the light of current trends that force corporations to work differently, such as:
AI, Big Data, IoT, VR, Collaboration Platforms, Cloud Computing, Machine-Learning, Mobile Teams
exponential organizations that require less but better-skilled, tech-savvy employees
changing skillsets like coding, testing, and technical marketing
globalization and mobility allowing diverse and geographically separated teams to work together in real time
the rise of start-ups that leverage technology manned by digital natives: the Millennials and
changing business behavior as far as researching, data-mining, communicating and collaborating are concerned.
“In 1990 GM and Chrysler, brought in $36B in revenue and hired over a million workers. Now the Big Three are Apple, Facebook and Google and they bring in over a Trillion dollars in revenue but they have only 137,000 workers.” — Kim Komando
But there is an aspect where humans excel over machines. While machines have to learn from volumes of data, humans do not require that for higher-order thinking – even in novel situations. Whether a job can eventually be automated rests on whether the tasks performed or problems solved can be reduced to repetitive ones. That being said, we see jobs involving transcription, encoding, customer service, paralegal services, translation, and retail services as pretty much on the brink of extinction. In fact, when you go online to chat about customer issues, it’s pretty hard to tell whether you are talking to a chatbot or a live person. To be fair, writers and editors are also technology-assisted, but not at the precipice, at least not yet – saved perhaps by the value of quickly ferreting out information, integrating data, and drawing conclusions from the perspective of the human mind and emotions. We still see humans mapping out strategies, running campaigns, diagnosing rare diseases, and interpreting complex taxation or legalities.
Technology has indeed enabled us to work differently – more engaged and more productive. For as long as we stay ahead, we get to rule the machines. It is perhaps this very scenario that has kept bank workers from becoming obsolete despite the ubiquity of ATMs. In fact, Elon Musk has turned to humans to meet the production demands of the Tesla Model 3. With the ongoing war for talent, you get the upper hand if you continuously train and evolve to stay relevant. What is the must-have “know-how” for the imminent future? Talking tech is not just savvy; it also assures your job survival. Embrace change by training to be functional (as opposed to just a kibitzer) in coding, APIs, behavioral psychology, user experience, and the like. You don’t have to be a millennial to compete on their turf; you just need to be in tune with anything and everything digital. In today’s jungle, curiosity doesn’t kill the cat – it makes him the Lion King!