Free webinars on exciting topics

Thirsty for knowledge about AI, sensor fusion, neuromorphic processing, AI for autonomous vehicles, and industrial cameras? Here are some free webinars on these topics.

The industry organization Edge AI and Vision Alliance is now inviting participants to a series of free webinars on exciting topics such as artificial intelligence (AI), sensor fusion, neuromorphic processing, AI for autonomous vehicles, and industrial cameras.

Some of them take place on specific dates in the coming months, while others can be watched right away. There are plenty of goodies here:

Sensor Fusion for Autonomous Vehicles
December 15, 2020, 9 am Pacific Time

ADAS systems in vehicles have proven to reduce road fatalities, alert the driver to potential problems and avoid collisions. The recent availability of more powerful computing chips and sensors has enabled the development of even more advanced functions, expanding beyond safety assistance to incorporate increasingly automated driving capabilities. The implementation of these autonomous features requires the use of more sensors, more computing power and a more complex electric/electronic (E/E) system architecture.
In this presentation, Pierrick Boulay from Yole Développement will describe the growing need for sensors in autonomous systems, and the “fusion” required to coordinate them, for both automotive and industrial applications. The presentation will cover topics such as cameras, radar, LiDAR, E/E architectures and domain controllers.

Register Now

Power-efficient Edge AI Applications through Neuromorphic Processing
December 17, 2020, 9 am Pacific Time

Many edge AI processors take advantage of the spatial sparsity in neural network models to eliminate unnecessary computations and save power. Neuromorphic processors achieve further savings by performing event-based computation, which exploits the temporal sparsity inherent in data generated by audio, vision, olfactory, lidar, and other edge sensors. This presentation from Anil Mankar of BrainChip will provide an update on the AKD1000, BrainChip’s first neural network SoC, and describe the advantages of processing information in the event domain.
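
BrainChip's actual event-domain implementation is not detailed in this listing, but the idea of exploiting temporal sparsity can be illustrated with a minimal NumPy sketch: only inputs that changed since the previous frame ("events") trigger computation, while unchanged inputs cost nothing. The names, shapes and threshold below are illustrative assumptions, not AKD1000 specifics.

```python
import numpy as np

def event_based_update(prev_frame, new_frame, weights, acc, threshold=0.05):
    """Illustrative event-based layer update: only pixels whose value changed
    by more than `threshold` trigger computation; unchanged pixels cost nothing."""
    delta = new_frame - prev_frame
    events = np.abs(delta.ravel()) > threshold        # temporally sparse "events"
    # Accumulate only the contribution of the changed inputs into the activations.
    acc += weights[:, events] @ delta.ravel()[events]
    return acc, int(events.sum())

# Toy usage: a 32x32 input feeding 16 output neurons.
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 32 * 32))
frame0 = rng.random((32, 32))
frame1 = frame0.copy()
frame1[:4, :4] += 0.2                                 # only a small patch changes
activations, n_events = event_based_update(frame0, frame1, w, w @ frame0.ravel())
print(f"processed {n_events} events instead of {frame1.size} inputs")
```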

Register Now

The Role of Neural Network Acceleration in Automotive
January 12, 2021, 9 am Pacific Time

Industry research suggests that demand for ADAS will triple by around 2027. In addition, the automotive industry is already looking beyond this to full self-driving cars and robotaxis. Neural networks are fundamental to and will underpin this evolution from Level 2 and 3 ADAS to full self-driving at Level 4 and Level 5. These systems will have to cope with hundreds of complex scenarios, absorbing data from numerous sensors such as multiple cameras and LiDAR, to enable solutions such as automated valet parking, intersection management and safe driving through complex urban environments.

In this interview session, Jamie Broome, Head of Automotive Business, and Andrew Grant, Senior Director of Artificial Intelligence, both of Imagination Technologies, will share their thoughts and observations on the role of neural network acceleration in the future of automotive, as well as share insights on the company's product line and roadmap.

Register Now

Advancing the AI Processing Architecture for the Software-Defined Car
January 21, 2021, 9 am Pacific Time

Edge AI applications for automotive, such as the intelligent cockpit, ADAS and autonomous driving, have set the bar for one of the biggest technology challenges of our time. While the race toward the ultimate driverless vehicle accelerates, AI applications find their way into today’s vehicles at an increasing rate, thus demanding computing performance with high accuracy, high reliability, energy efficiency and cost effectiveness.
The Horizon Robotics Journey processor series and Matrix computing system, paired with state-of-the-art AI and computer vision software and an efficient AI toolkit, together form one of the most efficient AI-optimized solutions for smart mobility. This webinar will cover the following topics:

  • Horizon Robotics’ hardware architecture and roadmap
  • Vision application examples
  • Selected partnership projects
  • A proposal for metrics that best represent production price, power and performance for comparing AI processing platforms.

Register Now

Adding Embedded Cameras to Your Next Industrial Product Design
February 16, 2021, 9 am Pacific Time

Powerful embedded vision creates new possibilities and added value for many industrial products. In this presentation, Jan-Erik Schmitt from Vision Components will demonstrate multiple ways to integrate camera technology in hardware designs. He will also share the company's know-how and experience, from the invention of the first industrial-grade intelligent camera 25 years ago to state-of-the-art MIPI modules, board-level cameras and ready-to-use solutions.

Register Now

Key Trends in the Deployment of Edge AI and Computer Vision
(Two Sessions)

With so much happening in edge AI and computer vision applications and technology, and happening so fast, it can be difficult to see the big picture. This webinar from the Edge AI and Vision Alliance, presented by Jeff Bier, Founder of the Alliance and Co-founder and President of BDTI, examines the four most important trends that are fueling the proliferation of edge AI and vision applications and influencing the future of the industry.

Bier explains what's fueling each of these key trends, and highlights key implications for technology suppliers, solution developers and end-users. He also provides technology and application examples illustrating each of these trends, including spotlighting the winners of the Alliance's yearly Vision Product of the Year Awards. Two webinar sessions are offered.

Watch Now (First Session)

Watch Now (Second Session)

Algorithms, Processors and Tools for Visual AI: Analysis, Insights and Forecasts
(Two Sessions)

Every year since 2015, the Edge AI and Vision Alliance (formerly the Embedded Vision Alliance) has surveyed developers of computer vision-based systems and applications to understand what chips and tools they use to build their visual AI products. The results from our most recent survey, conducted in October 2019, were derived from responses received from more than 700 computer vision developers across a wide range of industries, organizations, geographical locations and job types.
In this webinar, we provide insights into the popular hardware and software platforms being used for vision-enabled end products, derived from the survey results. The webinar is presented by Jeff Bier, Founder of the Alliance and Co-founder and President of BDTI. Bier not only shares results from this year's survey but also compares and contrasts them with past years' survey data, identifying trends and extrapolating them to forecast future results. Two webinar sessions are offered.

Watch Now (First Session)

Watch Now (Second Session)

3D Imaging and Sensing: From Enhanced Photography to an Enabling Technology for AR and VR

Beginning in late 2017, Apple brought an innovative use case to mobile devices: by means of a structured light-based 3D sensing camera module that the company built into the front bezels of smartphones (and later, tablets), users were able to rapidly and reliably unlock their devices using only their faces. Android-based mobile device manufacturers have added front depth sensors to their products in response, and are now striving to enable applications that additionally leverage rear-side-mounted depth sensors.

So far, at least, these new applications are predominantly photography-related. In the near future, however, they're expected to further expand into augmented reality, virtual reality and other applications, following in the footsteps of Google's trendsetting Project Tango experiment of a few years ago. And with rumors suggesting that time-of-flight camera modules will also begin appearing on the rear sides of iPhones later this year, it's anyone's guess as to who—Android or iOS—will launch (and achieve widespread success) with this technology first.

In this webinar, market research firm Yole Développement will describe the application roadmap, market value and cost of those highly anticipated mobile 3D sensing modules, including topics such as CMOS image sensors, optical elements and VCSEL illumination. The webinar will be presented by Principal Analyst Pierre Cambou, who has been active in the imaging industry for more than 20 years.

Watch Now

A Computer Architecture Renaissance: Energy-efficient Deep Learning Processors for Machine Vision

The resurgence of research and development activity in computing architectures in recent years is rooted in fundamental concepts that will drive the industry forward in the coming decades. Sixty-year-old legacy processing approaches are being pushed aside, making room for new techniques that will carry diverse industries to new frontiers. Deep learning, a dominant discipline among those domains, will lead this renaissance because it is well suited to a wide range of perception tasks. Welcome to the era of domain-specific architectures. Why this? And why now?

Hailo has developed a specialized deep learning processor that delivers the performance of a data center-class computer to edge devices. Hailo's AI microprocessor is the product of a rethinking of traditional computer architectures, enabling smart devices to perform sophisticated deep learning tasks such as imagery and sensory processing in real time with minimal power consumption, size and cost.

In this webinar, Hailo will navigate through the undercurrents that drove the definition and development of Hailo's AI processor, beginning with the theoretical reasoning behind domain-specific architectures and their implementation in the field of deep learning, specifically for machine vision applications. They will also describe various quantitative measures, presenting detailed design examples in order to make a link between theory and practice.

Watch Now

Delivering Milliwatt AI to the Edge with Ultra-Low Power FPGAs
(Two Sessions)

Products with embedded artificial intelligence (AI) capabilities are increasingly being developed for smart home, smart city, and smart factory applications. Low power FPGAs are proving to be well suited for implementing machine learning inferencing at the edge, given their inherent parallel architecture. By combining ultra-low power, high performance, programmability and comprehensive interface support, these low power FPGAs give edge device developers the flexibility they need to address changing design requirements, including the ability to adapt to evolving deep learning algorithms and architectures.

Lattice's iCE40 UltraPlus and ECP5 product families support development of Edge AI solutions that consume anywhere from 1 mW to 1 W on compact hardware platforms. To accelerate development, Lattice has also brought together the award-winning sensAI stack, which gives designers all of the tools they need to develop low power, high performance Edge devices. The first section of this webinar explores the sensAI stack, various system-level architectures to implement smart edge devices, and Lattice's end-to-end reference design flow that simplifies the implementation of target applications, including security and smart cameras, human-to-machine interfacing using voice and gesture, and object identification. The focus then shifts to memory management techniques, quantization and fractional settings used for deployment of small models on low power edge devices. Performance and power metrics are provided for common use cases, and are also compared against other hardware implementation options such as MCUs.
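
The sensAI tool flow itself is not reproduced here, but the quantization step the webinar refers to can be illustrated generically. Below is a minimal sketch of symmetric post-training int8 quantization, which shrinks weight storage roughly four-fold, the kind of saving needed to fit models into the small memories of low power edge devices. The function names and tensor sizes are illustrative assumptions, not part of Lattice's stack.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: map float32 weights to int8
    plus a single per-tensor scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.mean(np.abs(w - dequantize(q, scale)))
print(f"4x smaller ({w.nbytes} -> {q.nbytes} bytes), mean abs error {err:.4f}")
```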

Watch Now (First Session)

Watch Now (Second Session)

Renesas' Dynamically Reconfigurable Processor (DRP) Technology Enables a Hybrid Approach for Embedded Vision Solutions

Higher display resolutions, coupled with faster video frame rates, place greater demands on image processing system performance. The key challenge designers consequently face is how to achieve higher video processing performance within the size, power consumption, and thermal dissipation constraints imposed by the requirements of embedded systems. Traditional approaches for boosting performance, such as "throwing" more CPU cores at the problem or creating custom silicon to implement high-speed data path circuits, are not ideally suited for the rapidly evolving machine vision market.

Renesas' DRP technology, built into select Arm® Cortex-based RZ Family MPUs, accelerates image processing algorithms with runtime-reconfigurable hardware that delivers the acceleration benefits of dedicated circuitry while avoiding the cost and power penalties associated with embedded FPGA-based approaches. The hybrid CPU/DRP architecture of the RZ/A2M MPU combines Renesas' proprietary DRP technology for fast pre-processing of image data and feature extraction, running alongside an Arm® Cortex®-A9 CPU for decision making, to deliver a unique hybrid approach for accelerating machine vision applications. In this webinar, Renesas will explain the DRP architecture and its operation, present benchmarks and design examples demonstrating more than 10x the performance of traditional CPU-only solutions, and introduce resources for developing DRP-based embedded vision systems with the RZ/A2M MPU.

Watch Now

Key Trends in the Deployment of Visual AI
(Two Sessions)

With so much happening in visual AI and computer vision applications and technology, and happening so fast, it can be difficult to see the big picture. This webinar from the Embedded Vision Alliance, presented by Jeff Bier, Founder of the Alliance and Co-founder and President of BDTI, examines the four most important trends that are fueling the proliferation of vision applications and influencing the future of the industry.

Bier explains what's fueling each of these key trends, and highlights key implications for technology suppliers, solution developers and end-users. He also provides technology and application examples illustrating each of these trends, including spotlighting the winners of the Alliance's yearly Vision Product of the Year Awards. Two webinar sessions are offered.

Watch Now (First Session)

Watch Now (Second Session)

Architecting Always-On, Context-Aware, On-Device AI Using Flexible Low-power FPGAs

Driven by concerns such as network bandwidth limitations, privacy and decision latency, interest in on-device Artificial Intelligence (AI) is increasing. Always-on context awareness is a key requirement in devices at the Edge, including mobile, smart home, smart city, and smart factory applications. Many of these Edge devices are battery-operated or have thermal constraints, leading to stringent power, size, and cost limitations. Additionally, the inferencing solution has to be flexible enough to adapt to evolving deep learning algorithms and architectures, including on-device training.

Given this unique mix of requirements for on-device Edge AI, developers need to architect their systems thoughtfully, both at the system level and the chip level. FPGAs are proving to be well suited for implementing machine learning inferencing, given their inherent parallel processing capabilities. In this webinar, Lattice Semiconductor uses its experience in developing always-on, vision-based AI solutions to illustrate these tradeoffs and explore optimizations across implementations ranging from 1 mW to 1 W.

Watch Now

An Introduction to Developing Vision Applications Using Deep Learning and Google's TensorFlow Framework

This webinar introduces deep learning for vision tasks and Google's TensorFlow framework. It is presented by Pete Warden, Google research engineer and the company's technology lead on the mobile and embedded TensorFlow team. The webinar begins with an overview of deep learning and its use for computer vision tasks. It then introduces Google's TensorFlow as a popular open source framework for deep learning development, training, and deployment, and provides an overview of the resources Google offers to enable you to kick-start your own deep learning project. Warden concludes with several case study design examples that showcase TensorFlow use and optimization on resource-constrained mobile and embedded devices.
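
As a taste of what the webinar covers, here is a minimal TensorFlow sketch: a small Keras convolutional classifier trained on MNIST and converted to TensorFlow Lite for mobile and embedded deployment. It is a generic starter example, not code from Warden's presentation.

```python
import tensorflow as tf

# A minimal convolutional classifier, the kind of starter model covered in
# introductory TensorFlow material.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train briefly on MNIST, then convert to TensorFlow Lite for
# resource-constrained mobile and embedded deployment.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0
model.fit(x_train, y_train, epochs=1, batch_size=128, validation_split=0.1)
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
open("model.tflite", "wb").write(tflite_model)
```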

Watch Now

Efficient Processing for Deep Learning: Challenges and Opportunities

Deep neural networks (DNNs) are proving very effective for a variety of challenging machine perception tasks. But these algorithms are very computationally demanding. To enable DNNs to be used in practical applications, it’s critical to find efficient ways to implement them. This free hour-long webinar explores how DNNs are being mapped onto today’s processor architectures, and how both DNN algorithms and specialized processors are evolving to enable improved efficiency. It is presented by Dr. Vivienne Sze, Associate Professor in the Electrical Engineering and Computer Science Department at MIT (www.rle.mit.edu/eems). Sze concludes with suggestions on how to evaluate competing processor solutions in order to address your particular application and design requirements.
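
A quick back-of-the-envelope calculation shows why efficiency matters: the multiply-accumulate (MAC) count of even a single convolutional layer runs into the hundreds of millions. The numbers below are a generic illustration, not figures from Sze's webinar.

```python
def conv_macs(h_out, w_out, c_in, c_out, k):
    """Multiply-accumulate count for one k x k convolutional layer."""
    return h_out * w_out * c_out * c_in * k * k

# Example: a 3x3 convolution producing 64 channels on a 112x112x64 feature map.
macs = conv_macs(112, 112, 64, 64, 3)
print(f"{macs / 1e6:.0f} million MACs for a single layer")  # ~462 million
```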

Watch Now

OpenCV on Zynq: Accelerating 4k60 Dense Optical Flow and Stereo Vision

In this free hour-long webinar, Xilinx presents a new approach that enables designers to unleash the power of FPGAs using hardware-tuned OpenCV libraries, a familiar C/C++ development environment, and readily available hardware development platforms. OpenCV libraries are widely used for algorithm prototyping by many leading technology companies and computer vision researchers. FPGAs can achieve unparalleled compute efficiency on complex algorithms like dense optical flow and stereo vision in only a few watts of power. However, unlocking these capabilities traditionally required hardware design expertise and use of languages like Verilog and VHDL.
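
For a feel of the workload being accelerated, here is a minimal CPU-only OpenCV sketch of dense optical flow using the Farneback method. It illustrates the algorithm class only; the webinar's hardware-tuned FPGA flow is not reproduced here, and the video filename is a placeholder assumption.

```python
import cv2

# Dense optical flow on a video file with OpenCV's Farneback algorithm.
cap = cv2.VideoCapture("video.mp4")  # placeholder path
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense flow: one (dx, dy) motion vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print(f"mean motion magnitude: {mag.mean():.2f} px")
    prev_gray = gray
cap.release()
```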

Watch Now

Develop Smart Computer Vision Solutions Faster: Presented by Intel

In this free hour-long webinar, Intel shows developers how to innovate by including computer vision and "smart" capabilities in their embedded solutions and applications. Learn how to optimize for high performance and power-efficiency, then integrate visual understanding by accelerating deep learning inference and classical computer vision operations with free expert software tools. Take advantage of an API, algorithms, custom kernels, code samples, and how-to examples that speed your development. Ensure you tap into the full power of Intel® processors that you use today.

Watch Now

Caffe to Zynq: State-of-the-Art Machine Learning Inference Performance in Less Than 5 Watts

In this free hour-long webinar, Xilinx presents a method for easily migrating a CNN running in Caffe to an efficient Zynq-based embedded vision system utilizing Xilinx’s new reVISION software stack. Machine learning research is advancing daily with new network architectures, making it difficult to choose the best CNN algorithm for a particular application. With this rapid rate of change in algorithms, embedded system developers who require high performance and low power consumption are increasingly considering Zynq SoCs. Zynq SoCs are ideal for efficient CNN implementation as they allow creation of custom network circuitry in hardware, tuned exactly to the needs of the algorithm. The result is state-of-the-art performance-per-watt that outstrips CPU- and GPU-based embedded systems.

Watch Now

Vision with Precision: Augmented Reality

In this free hour-long webinar, Xilinx and Tractica present the emerging field of Augmented Reality (AR), discussing a number of use cases outside of the more commonly known consumer examples.

Watch Now

Vision with Precision: Medical Imaging

In this free hour-long webinar, Xilinx and its Xilinx Alliance Program member, TOPIC Embedded, present critical challenges facing developers of advanced medical imaging systems.

Watch Now

Learning at the Speed of Sight

Explore deep learning and computer vision technology for embedded systems in this free hour-long webinar from VeriSilicon, which discusses the increased demand for vision processing in a wide range of embedded applications, such as augmented reality, ADAS, autonomous vehicles and other devices, surveillance systems, drones, and IoT products.

Watch Now

A Brief Introduction to Deep Learning for Vision and the Caffe Framework

This free hour-long webinar, presented by the primary Caffe developers from the U.C. Berkeley Vision and Learning Center, begins with an introduction to deep learning and its use for computer vision tasks. It then introduces convolutional neural networks (CNNs) and explains why they have recently emerged as a powerful technique for a wide range of vision tasks. Finally, the webinar introduces the popular Caffe open source framework for CNN development, training, and deployment, and provides an overview of the resources Caffe offers to enable you to kick-start your own CNN project.
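
To give a concrete flavor, here is a minimal sketch of running inference with Caffe's Python interface on a pretrained CNN. The deploy prototxt, weights file, image name and output blob name are placeholders and assumptions, not materials provided with the webinar.

```python
import caffe

caffe.set_mode_cpu()
# Placeholder model files for a CaffeNet-style classifier.
net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)

# Preprocess an image into the network's expected NCHW layout.
image = caffe.io.load_image("cat.jpg")                  # HWC, float in [0, 1]
transformer = caffe.io.Transformer({"data": net.blobs["data"].data.shape})
transformer.set_transpose("data", (2, 0, 1))            # HWC -> CHW
transformer.set_raw_scale("data", 255)                  # [0, 1] -> [0, 255]
transformer.set_channel_swap("data", (2, 1, 0))         # RGB -> BGR
net.blobs["data"].data[...] = transformer.preprocess("data", image)

out = net.forward()
print("predicted class:", out["prob"][0].argmax())      # assumes a "prob" output blob
```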

Watch Now
