Learn how developers can quickly set up call control that directs callers by voice. The twist? We add sentiment analysis from Symbl.ai to detect whether a caller is cranky and redirect them to a live person immediately. Find out how it's done and what else Telnyx is up to!
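The routing logic at the heart of this demo can be sketched as follows. Note that `get_sentiment_score` and `transfer_to_agent` are hypothetical placeholders standing in for the Symbl.ai and Telnyx API calls, not their real SDK methods, and the threshold scale is an assumption.

```python
# Minimal sketch of sentiment-based call routing.
# get_sentiment_score and transfer_to_agent are hypothetical
# placeholders for the Symbl.ai / Telnyx API calls.

FRUSTRATION_THRESHOLD = -0.4  # assumed scale: -1 (very negative) .. +1 (very positive)

def route_call(transcript_chunk, call, get_sentiment_score, transfer_to_agent):
    """Redirect the caller to a live agent if sentiment turns negative."""
    score = get_sentiment_score(transcript_chunk)
    if score < FRUSTRATION_THRESHOLD:
        transfer_to_agent(call)
        return "agent"
    return "ivr"
```

In a real deployment this check would run on each streamed transcript chunk, so an unhappy caller is rerouted mid-call rather than at the end of the IVR flow.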
The COVID-19 global pandemic changed how we live, work, and shop. It has changed consumers' behavior and shopping patterns and caused widespread disruption the world over. From detection to prevention, remote work to virtual gatherings, and online shopping to streaming content, AI is playing a pivotal role in our day-to-day lives even during the pandemic. This talk covers various AI technologies that are helping us deal with life during and after the pandemic.
This talk will provide a brief overview of recent research on incorporating safety into advanced driver-assistance systems (ADAS) for self-driving cars in several driving scenarios, such as lane keeping and tire blowouts. Specifically, we will introduce tire blowout modeling and background, and present our research on safety control based on control barrier functions.
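As background, the standard control barrier function (CBF) condition can be stated as follows; this is the generic formulation, and the talk's specific construction for tire blowouts may differ.

```latex
% Safe set defined by a continuously differentiable function h:
%   C = \{ x : h(x) \ge 0 \}
% h is a control barrier function for \dot{x} = f(x) + g(x)u if there
% exists an extended class-K function \alpha such that
\sup_{u \in U} \left[ L_f h(x) + L_g h(x)\,u \right] \ge -\alpha\big(h(x)\big)
% Any controller choosing u that satisfies this inequality renders C
% forward invariant, i.e., a trajectory starting in the safe set stays safe.
```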
The DeFi market has continued to push new all-time highs every month. The launch of the first Bitcoin futures ETF came to fruition and paved the way for broader adoption. The number of DeFi wallets continues to grow, reaching a new all-time high of 3.7 million unique addresses this month. Undoubtedly, composability is enabling rapid innovation across lending, trading, borrowing, exchanges, and more. Many traditional finance organizations, too, are focusing on transforming services and creating investment opportunities through a CeFi (centralized finance) approach. This talk covers the evolution of DeFi and current and future trends in the space, ending with a demo that showcases the potential of DeFi apps.
Convolutional neural networks (CNNs) have been shown to be extremely powerful for object recognition tasks. This remarkable success comes at the expense of high computational cost, making these networks unsuitable for embedded scenarios (e.g., on-camera video surveillance). Vision-based intersection management (vIM) of connected and autonomous vehicles (CAVs) is one of the emerging applications that will become an essential part of cities. A study conducted by the American Automobile Association (AAA) shows that more than two people are killed every day in the U.S. in accidents caused by red-light runners.
Traffic light planning is already complex without accounting for pedestrians. Pedestrians congregating at an intersection can cause safety issues for all vehicles and people. Accurately counting pedestrians, vehicles, and bicycles is difficult for existing technology on the market. Computer vision allows for the simple integration of counting people, cars, and bikes while removing concerns that Big Brother is watching. Count data also enables additional analysis: if pedestrian counts are not changing, there may be a need to modify traffic patterns or alert first responders. Starting with simple, actionable data is the first step in this process and will benefit the overall safety of the nation's transportation system and ultimately strengthen the economy at large.
We face two main challenges in vIM: 1) The processing unit needs to be at the intersection; using cloud computing is not feasible due to bandwidth usage and delay. 2) In remote regions, vIM needs to be powered by solar panels, limiting the management unit’s available energy.
Object recognition is the most energy- and computation-demanding module in vIM. To address this problem, a new set of recognition models has been proposed that model the temporal relationships between frames. These models rely on contextual cues and memory to supplement their understanding of the environment. Although they can reduce computation cost, their performance relies on extracting deep features from a few keyframes, and keyframe selection depends heavily on how often significant scene changes occur. The keyframe mechanism becomes the impeding factor preventing deployment on embedded devices, as the time spent selecting keyframes and extracting their features hurts the system's response time. An adaptive, region-scale knowledge distillation framework can address this issue. This hardware-friendly framework provides fast and energy-efficient object recognition on embedded devices by addressing the two challenges above.
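As background, the classic knowledge distillation objective that such frameworks build on can be sketched as below. The adaptive, region-scale variant described here would apply a loss of this kind per image region rather than per image; the exact loss used in the framework is an assumption.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend of hard-label cross-entropy and soft-target KL divergence."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 as in the standard formulation
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels])
    return np.mean(alpha * hard + (1 - alpha) * (T ** 2) * kl)
```

The small student network, trained to match the large teacher's softened outputs, is what makes deployment on a solar-powered roadside unit plausible.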
What’s the real-world problem?
Autonomous vehicles (AVs) are reported to be overly conservative in that they tend to be easily cut off by human drivers at intersections, highway ramps, and roundabouts. These incidents are often caused by the AVs’ lack of understanding of human signaling (e.g., mild braking to offer right of way), rather than human bullying.
Why is this problem important?
The lack of mutual understanding between humans and AVs has created a public distrust of autonomous driving technologies.
How do we address this problem?
In this talk, we provide a game-theoretic reasoning of such incidents and discuss potential solutions. We will start by introducing the concept of incomplete-information games, which underpins human-robot interactions, and explain that understanding how players in such games update their beliefs about others' intent is critical for their decision making. We will then distinguish between two types of belief update schemes, namely, empathetic and non-empathetic. Informally, an empathetic agent acknowledges the fact that others do not have full knowledge about its own intent, while a non-empathetic agent assumes otherwise. Lastly, we will show that during two-vehicle interactions at uncontrolled intersections, the lack of empathy can cause the AV to falsely believe that the human driver is competitive and to be cut off, reproducing real-world incidents. The consideration of the incomplete-information nature of vehicle interactions differentiates our work from the prior state of the art in human-robot interaction research. We will briefly discuss the consequent challenges in improving machines' understanding of human belief dynamics from both game theory and machine learning perspectives.
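The belief updates discussed above can be illustrated with a minimal Bayesian sketch. The intent categories and the likelihood numbers here are illustrative assumptions, not the talk's actual game formulation.

```python
import numpy as np

def update_belief(prior, likelihoods):
    """Bayesian update: posterior ∝ prior × P(observed action | intent)."""
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# Illustrative intent hypotheses for the other driver (assumed, not from the talk).
intents = ["yielding", "competitive"]
belief = np.array([0.5, 0.5])  # uniform prior over the driver's intent

# Observation: the human driver brakes mildly. A yielding driver is assumed
# far more likely to signal this way than a competitive one.
likelihood_of_mild_braking = np.array([0.8, 0.2])
belief = update_belief(belief, likelihood_of_mild_braking)
# belief now favors "yielding"
```

An empathetic agent would additionally model the human's belief about the AV's own intent, i.e., nest a second belief update of this form inside its reasoning.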
This talk is partially supported by an NSF National Robotics Initiative Grant (NRI:FND 1925403) and an Amazon AWS Machine Learning Research Award (MLRA). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding entities.
3D printing has come a long way since the RepRap project started almost 20 years ago. It's still not at the "buy a 3D printer from OfficeMax" level, but anyone with some willingness to tinker can get started.
In this session we will start with the design process, move through slicing, and then cover printing and troubleshooting. We will share tips on which printer to buy, which features to look for, and why a $200 printer might just end with you giving up instead of finding a new passion. By the end of the talk, we'll have a print going on one of the printers we brought along.
Consume just about any sci-fi content and you'll come away with the idea that artificial intelligence is destined to destroy all human life. In reality, the artificial intelligence all around us is making us better humans. I'll share examples from my own life and across multiple industries of bots and AI creating safety nets, suggestions for improvement, and even happiness. Let's build the future.
This isn’t a technical talk. We won’t discuss training models (except to remove bias) or write code. Instead, I want to inspire everyone to build a better humanity by uncovering problems that this type of technology can solve.
In recent years, algorithms for AutoML and neural architecture search have automatically found architectures that outperform the best human-designed architectures. In this talk, I will give a technical dive into recent advances in AutoML, including performance prediction techniques, learning curve extrapolation techniques, and algorithms for neural architecture search.
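One of the learning-curve extrapolation ideas mentioned above can be illustrated minimally: fit a power law to the first few epochs of a candidate architecture's validation loss and predict its later performance, so a search algorithm can discard poor candidates early. The power-law form is one commonly assumed model, not the specific technique from the talk.

```python
import numpy as np

def extrapolate_power_law(epochs, losses, target_epoch):
    """Fit loss ≈ a * epoch^(-b) in log-log space and extrapolate."""
    log_t, log_l = np.log(epochs), np.log(losses)
    slope, intercept = np.polyfit(log_t, log_l, 1)  # log l = slope*log t + intercept
    return np.exp(intercept) * target_epoch ** slope

# Synthetic early validation losses, roughly following t^(-0.5)
epochs = np.array([1, 2, 3, 4])
observed = np.array([1.0, 0.71, 0.58, 0.5])
predicted = extrapolate_power_law(epochs, observed, target_epoch=100)
```

A NAS controller would compare `predicted` across candidates after only a few epochs each, spending full training budget only on the most promising architectures.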
I will go over my journey with Tesla, Autopilot, and FSD, and how software is eating the auto industry. You will find out whether it's really a big deal or just hype from Elon Musk.
This talk covers applications of computer vision, using deep learning models in conjunction with V2X technology, to help improve roadway safety and provide key data points for smart cities.