For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/transforming-enterprise-intelligence-the-power-of-computer-vision-and-gen-ai-at-the-edge-with-openvino-a-presentation-from-intel/
Leila Sabeti, Americas AI Technical Sales Lead at Intel, presents the “Transforming Enterprise Intelligence: The Power of Computer Vision and Gen AI at the Edge with OpenVINO” tutorial at the May 2024 Embedded Vision Summit.
In this talk, Sabeti focuses on the transformative impact of AI at the edge, highlighting the role of the OpenVINO toolkit in streamlining the AI solution life cycle on Intel hardware. This includes the development of energy-efficient computer vision and generative AI models suitable for edge computing.
Sabeti showcases cutting-edge AI applications, such as multimodal LLMs for document understanding and YOLO object detection for smart retail solutions. She addresses the entire edge compute ecosystem, discussing how to optimize AI processes from training to inference across various computing platforms, including Intel GPUs. Additionally, she explores how businesses can seamlessly transition between edge and cloud environments and how Intel’s portfolio of solutions unlocks the advantages of edge computing, such as data protection and AI acceleration.
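As a concrete picture of the inference flow the talk centers on, here is a minimal sketch using the OpenVINO Python API; the model file, device choice and input shape are illustrative assumptions, not specifics from the presentation.

```python
# Minimal OpenVINO inference sketch (model file, device and input shape
# are illustrative assumptions, not specifics from the talk).
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("yolov8n.onnx")        # any exported detection model
compiled = core.compile_model(model, "AUTO")   # AUTO selects CPU/GPU/NPU

# Dummy 640x640 RGB frame in NCHW layout, normalized to [0, 1].
frame = np.random.rand(1, 3, 640, 640).astype(np.float32)
detections = compiled([frame])[compiled.output(0)]
print(detections.shape)
```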
Increase Operational Efficiency with District Heating and Cooling (DHC) Management System Powered by the CyberVille® IoE / Industrial Internet Application Platform
In the video below, you can see Fortum's Suomenoja CHP power plant installation in action: https://www.youtube.com/watch?v=e6upXL-qcG4
Please also see the Industrial Internet Consortium (IIC) case study on the same topic: http://www.iiconsortium.org/case-studies/Cyberlightning_Fortum_Case_Study.pdf
Leveraging Artificial Intelligence Processing on Edge Devices (ICS)
The introduction of low-cost, high-performance embedded processors, coupled with improvements in neural network model optimization, lays the foundation for AI and computer vision at the edge. Moving intelligence from the cloud to the edge offers many advantages, including reduced network traffic, predictable ML inference times and improved data security, to name a few. Challenges exist, as many development teams do not have data scientists or AI development engineers. What is needed are practical AI solutions, including ML development tools, optimized inference engines and reference platforms, that abstract away development complexity to streamline prototyping and development.
In this joint webinar with Au-Zone Technologies, we will discuss:
- Development challenges and solutions that can be used to enable AI/ML at the edge to implement object detection, classification and tracking for medical and industrial use cases (a generic inference sketch follows this list)
- Visualization techniques for activity monitoring and object detection
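The webinar's tooling is Au-Zone's own; as a neutral stand-in, the sketch below shows the general shape of edge object detection with the TensorFlow Lite runtime. The model file and input handling are illustrative assumptions.

```python
# Generic edge object-detection sketch using the TensorFlow Lite runtime
# (a stand-in for the vendor tooling discussed; model file is illustrative).
import numpy as np
import tflite_runtime.interpreter as tflite   # or tensorflow.lite

interpreter = tflite.Interpreter(model_path="detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy camera frame shaped and typed to the model's expected input.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
detections = interpreter.get_tensor(out["index"])  # boxes/scores tensor
```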
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/06/intel-video-ai-box-converging-ai-media-and-computing-in-a-compact-and-open-platform-a-presentation-from-intel/
Richard Chuang, Principal AI Engineer at Intel, presents the “Intel Video AI Box—Converging AI, Media and Computing in a Compact and Open Platform” tutorial at the May 2022 Embedded Vision Summit.
As a system integrator, solution provider or AI developer, you need to run your AI applications efficiently at the edge with sufficient throughput. Does your edge device run either generic computing or deep learning inferencing, but not both? Intel Video AI Box with Core CPU and integrated Xe LP graphics offers a compact solution to run video AI analytics at the edge with the support to orchestrate AI applications and workloads in cloud-to-edge deployments.
In this presentation, you’ll learn about Intel’s new platform, comprising an Intel CPU with integrated graphics and the Edge AI Box for Video Analytics software package, and how it enables developing cutting-edge video solutions faster. Chuang also explores EFLOW enablement on the platform, which allows Windows-based business applications to run rich Linux AI workload containers with Azure cloud connections for scalable deployments.
At the digital conference The Future of AI & Big Data Analytics, Julian Fischer, Artificial Intelligence Account Executive at Intel, gives insight into end-to-end AI acceleration with Intel.
Julian Fischer has been with Intel since 2018. He is committed to advancing digitalization on the basis of Intel technology by supporting his customers and partners across all IT challenges, including the future of the workplace, artificial intelligence and data center transformation.
Dell is launching a key building block for IoT projects: the “Edge Gateway.” We will introduce it in person, cover the other offerings Dell provides in this area, and review all the components, both from Dell and from third parties, that an IoT project should comprise. We will also discuss the critical success factors and show how an IoT project can be started, successfully, today!
A talk on reducing costs & increasing efficiencies by designing, testing & engineering in simulation first, plus examples of robotics & environmental capability.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/11/smarter-manufacturing-with-intels-deep-learning-based-machine-vision-a-presentation-from-intel/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Tara K. Thimmanaik, Solutions Architect at Intel, presents the “Smarter Manufacturing with Intel’s Deep Learning-Based Machine Vision” tutorial at the September 2020 Embedded Vision Summit.
As demand for smarter and more efficient manufacturing is growing, IoT technologies—including sensors, edge devices, gateways, servers and the cloud—are being used throughout the factory to compute deep learning analytics workloads at the appropriate location. Efficient data-driven manufacturing can help to reduce labor costs, increase quality and maximize profit. The biggest hindrance to achieving these outcomes is the difficulty in extracting data from vendor-locked and proprietary systems for analytics downstream.
In this presentation, Thimmanaik covers Intel’s approach to developing open, flexible and scalable solutions, including:
• Intel’s technologies such as OpenVINO, Movidius Vision Processing Units, Edge Insights Software (EIS) and deep learning algorithms
• How Intel’s offerings come together in the industrial marketplace with partnerships forged to address the constraints of manufacturing infrastructure
• Real-world examples highlighting defect detection in textile printing (where 90% accuracy at 50 fps was achieved) and smartphone screen production (where false negatives were only 0.6%)
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/06/the-future-of-ai-is-here-today-deep-dive-into-qualcomms-on-device-ai-offerings-a-presentation-from-qualcomm/
Vinesh Sukumar, Senior Director and Head of AI/ML Product Management at Qualcomm, presents the “Future of AI is Here Today: Deep Dive into Qualcomm’s On-Device AI Offerings” tutorial at the May 2022 Embedded Vision Summit.
As a leader in on-device AI, Qualcomm is in a unique position to deliver optimized and now personalized AI experiences to consumers, made possible via innovation in hardware technology and investment across the entire software stack. This investment is now deeply rooted in all of our product offerings, spread across multiple verticals from mobile to automotive.
In this talk, Sukumar explores the high-performance, low-power Hexagon processor — the core of his company’s latest 7th Generation AI Engine — and shows how the company scales it across the range of products that Qualcomm offers. He also highlights Qualcomm’s investment in advanced techniques such as the latest quantization approaches and neural architecture search to accelerate AI deployment. Finally, he shares details on how his company incorporates these technologies into AI solutions that power Qualcomm’s vision of on-device AI — and shows how these solutions are employed in real-world use cases across many verticals.
Brian Gilmore [InfluxData] | InfluxDB in an IoT Application Architecture | In... (InfluxData)
There are many challenges to building production IoT applications — whether deployed on the shop floor or in millions of homes. Data, specifically time series data, need not be one of them. In this session, Brian Gilmore, IoT Product Manager at InfluxData, outlines the key components of an architecture for capturing and analyzing IoT data at ANY scale and showcases how he has implemented these recommendations in his own lab. You will leave this virtual talk with a blueprint for getting started yourself — this talk also covers integrations with machine learning and other advanced topics, so InfluxDB users of all experience levels are welcome!
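For a feel of the basic ingestion step such an architecture starts from, here is a minimal write sketch with the influxdb-client Python package; the URL, token, org, bucket and measurement names are placeholders.

```python
# Minimal time series write with influxdb-client (connection details,
# bucket and measurement names are placeholders).
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

point = (
    Point("machine_metrics")           # measurement
    .tag("device", "sensor-01")        # indexed metadata
    .field("temperature_c", 21.7)      # the actual sample
)
write_api.write(bucket="iot", record=point)
client.close()
```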
Discover existing customer stories from various industries such as manufacturing, logistics and construction. No theoretical use cases, but in-depth insights that will help you on how to get started with IoT.
Whether you are an AI, HPC, IoT, Graphics, Networking or Media developer, visit the Intel Developer Zone today to access the latest software products, resources, training, and support. Test-drive the latest Intel hardware and software products on DevCloud, our online development sandbox, and use DevMesh, our online collaboration portal, to meet and work with other innovators and product leaders. Get started by joining the Intel Developer Community @ software.intel.com.
This issue’s feature article, Tuning Autonomous Driving Using Intel® System Studio, illustrates how the tools in Intel System Studio give embedded systems and connected device developers an integrated development environment to build, debug, and tune performance and power usage. Continuing the theme of tuning edge applications, Building Fast Data Compression Code for Cloud and Edge Applications shows how to use the Intel® Integrated Performance Primitives to speed data compression.
This document discusses AI vision and a hybrid approach using both edge and server-based analytics. It outlines some of the challenges of vision problems where data is analog, complex, and data-heavy. A hybrid approach is proposed that uses edge devices for initial analysis similar to the ventral stream, while also using servers for deeper correlation and inference like the dorsal stream. This combines the strengths of edge and server-based computing on platforms like Intel that support both CPUs and GPUs to efficiently solve real-world vision problems. Several case studies are provided as examples.
[API World 2021] - Understanding Cloud Native Deployment (WSO2)
Microservices and APIs built for digital transformation products require agile, reliable, and scalable cloud native infrastructure to truly meet customer expectations for a great "always there" user experience. Whether deployed on-premises or hosted in a public cloud, understanding and leveraging the right approach is key to success. This session takes up where the development process leaves off, tracking the standardization of containers and container orchestration for automated deployment, including current and future platform trends WSO2 and others are following.
Industrial IoT with Intel IoT Gateway & Octoblu (Intel® Software)
This document discusses Intel and Octoblu's partnership to enable industrial IoT solutions. It describes Intel's IoT gateway platform and Octoblu's IoT cloud platform. Together, their platforms provide a full-stack solution for connecting devices, routing data, running automation workflows, and providing security and management tools. The document also notes the large market opportunity for industrial IoT and provides an overview of how customers can deploy the joint solution either on-premises or via various cloud options.
IoT Summit: Design and Architect Always-Disconnected IoT Systems (Marco Dal Pino)
Windows 10 IoT Core provides a full-featured platform for building small-footprint, smart IoT devices. It offers built-in security, connectivity to cloud services, and access to hardware. Windows 10 IoT Core can help rapidly prototype ideas and scale solutions using Azure IoT services. It supports a wide range of hardware, and existing Windows CE applications can be migrated. The Robot Operating System (ROS) is also supported, allowing for advanced robotics applications to be developed.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/11/federated-edge-computing-system-architectures-a-presentation-from-intel/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Vaidyanathan Krishnamoorthy, Edge Inference Solutions Architect at Intel, presents the “Federated Edge Computing System Architectures” tutorial at the September 2020 Embedded Vision Summit.
With ever-increasing amounts of video and other sensor data, and growing requirements for privacy and low latency, inferencing at the edge is increasingly attractive. But there are many ways to allocate and coordinate computing resources for edge inferencing. For example, to achieve scale and fault tolerance, design principles from cloud computing can be applied to create compute clusters at the edge that are managed by the cloud, an approach called “federated computing.”
In this talk, Krishnamoorthy explores a range of edge computing system architectures, with a focus on federated computing. He illustrates how these system architectures utilize Intel CPUs and accelerators to address real-world use cases in retail and industrial applications.
The number of internet-connected devices is growing exponentially, enabling an increasing number of edge applications in environments such as smart cities, retail, and Industry 4.0. These intelligent solutions often require processing large amounts of data, running models to enable image recognition, predictive analytics, autonomous systems, and more. Increasing system workloads and data processing capacity at the edge is essential to minimize latency, improve responsiveness, and reduce network traffic back to data centers. Purpose-built systems such as Supermicro’s short-depth, multi-node SuperEdge, powered by 3rd Gen Intel® Xeon® Scalable processors, increase compute and I/O density at the edge and enable businesses to further accelerate innovation.
Join this webinar to discover new insights in edge-to-cloud infrastructures and learn how Supermicro SuperEdge multi-node solutions leverage data center scale, performance, and efficiency for 5G, IoT, and Edge applications.
Similar to “Transforming Enterprise Intelligence: The Power of Computer Vision and Gen AI at the Edge with OpenVINO,” a Presentation from Intel
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/07/deploying-large-language-models-on-a-raspberry-pi-a-presentation-from-useful-sensors/
Pete Warden, CEO of Useful Sensors, presents the “Deploying Large Language Models on a Raspberry Pi” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, Warden outlines the key steps required to implement a large language model (LLM) on a Raspberry Pi. He begins by outlining the motivations for running LLMs on the edge and exploring practical use cases for LLMs at the edge. Next, he provides some rules of thumb for selecting hardware to run an LLM.
Warden then walks through the steps needed to adapt an LLM for an application using prompt engineering and LoRA retraining. He demonstrates how to build and run an LLM from scratch on a Raspberry Pi. Finally, he shows how to integrate an LLM with other edge system building blocks, such as a speech recognition engine to enable spoken input and application logic to trigger actions.
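One common route to the build-and-run step (an assumption for illustration; the talk builds its LLM from scratch) is the llama-cpp-python bindings with a small quantized model. The model file and parameters below are placeholders.

```python
# Running a small quantized LLM on a Raspberry Pi with llama-cpp-python
# (model file and generation parameters are illustrative placeholders).
from llama_cpp import Llama

llm = Llama(
    model_path="tinyllama-1.1b-chat.Q4_K_M.gguf",  # 4-bit quantized weights
    n_ctx=512,       # short context keeps RAM use Pi-friendly
    n_threads=4,     # one thread per core on a recent Pi
)
out = llm("Q: What can an on-device LLM do for a smart speaker?\nA:",
          max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```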
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/07/how-to-run-audio-and-vision-ai-algorithms-at-ultra-low-power-a-presentation-from-synaptics/
Deepak Mital, Senior Director of Architectures at Synaptics, presents the “How to Run Audio and Vision AI Algorithms at Ultra-low Power” tutorial at the May 2024 Embedded Vision Summit.
Running AI algorithms on battery-powered, low-cost devices requires a different approach to designing hardware and software. The power requirements are stringent at standby, but the device needs to be able to awaken quickly when an event is detected. The device needs to “pseudo” wake up, determine if the event needs attention, and then either go back to standby or become active to classify the event.
This multistage wake-up process and the associated intelligence requires tight orchestration of hardware and software. Apart from runtime software, the AI models must be highly optimized to fit and run on the constrained device. To show how this can be done, Mital presents a solution that combines hardware, software and AI models to enable running audio and video AI algorithms at ultra-low power.
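The orchestration Mital describes can be pictured as a small state machine; the sketch below is schematic only, with event hooks and timings as illustrative placeholders rather than Synaptics' implementation.

```python
# Schematic multistage wake-up loop (states, hooks and timings are
# illustrative placeholders, not Synaptics' implementation).
import enum
import time

class State(enum.Enum):
    STANDBY = 0      # ultra-low power; only the event detector runs
    PSEUDO_WAKE = 1  # tiny screening model decides if the event matters
    ACTIVE = 2       # full classifier runs, then the device sleeps again

def detector_fired() -> bool:            # e.g., mic level over a threshold
    return False                         # placeholder sensor hook

def screening_model_relevant() -> bool:  # low-cost "pseudo wake" check
    return False                         # placeholder model hook

state = State.STANDBY
while True:
    if state is State.STANDBY and detector_fired():
        state = State.PSEUDO_WAKE
    elif state is State.PSEUDO_WAKE:
        state = State.ACTIVE if screening_model_relevant() else State.STANDBY
    elif state is State.ACTIVE:
        # run the full audio/vision model here and act on the result
        state = State.STANDBY
    time.sleep(0.05)                     # stand-in for a hardware sleep cycle
```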
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/07/meeting-the-critical-needs-of-accuracy-performance-and-adaptability-in-embedded-neural-networks-a-presentation-from-quadric/
Aman Sikka, Chief Architect at Quadric, presents the “Meeting the Critical Needs of Accuracy, Performance and Adaptability in Embedded Neural Networks” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, Sikka explores the challenges of accuracy and performance when implementing quantized machine learning inference algorithms on embedded systems. He explains how the thoughtful use of fixed-point data types yields significant performance and efficiency gains without compromising accuracy. And he explores the need for modern SoCs to not only efficiently run current state-of-the-art neural networks but also to be able to adapt to future algorithms.
This requires industry to shift away from the approach of adding custom fixed-function accelerator blocks adjacent to legacy architectures and toward embracing flexible and adaptive hardware. This hardware flexibility not only allows SoCs to run new networks, but also enables ongoing software and compiler innovations to explore optimizations such as better data layout, operation fusion, operation remapping and operation scheduling without being constrained by a fixed hardware pipeline.
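To make the fixed-point idea concrete, here is a generic affine int8 quantization in numpy; it illustrates the principle Sikka discusses, not Quadric's specific scheme.

```python
# Generic affine int8 quantization (the principle, not Quadric's scheme):
# floats map onto 256 integer levels via a scale and zero point, and are
# recovered with a small, bounded rounding error.
import numpy as np

x = np.array([-1.8, -0.2, 0.0, 0.7, 2.3], dtype=np.float32)
qmin, qmax = -128, 127                         # int8 range
scale = (x.max() - x.min()) / (qmax - qmin)
zero_point = int(round(qmin - x.min() / scale))

q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
x_hat = (q.astype(np.float32) - zero_point) * scale    # dequantize
print("max rounding error:", np.abs(x - x_hat).max())  # ~half a step
```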
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/07/build-a-tiny-vision-application-in-minutes-with-the-edge-app-sdk-a-presentation-from-midokura-a-sony-group-company/
Dan Mihai Dumitriu, Chief Technology Officer at Midokura, a Sony Group company, presents the “Build a Tiny Vision Application in Minutes with the Edge App SDK” tutorial at the May 2024 Embedded Vision Summit.
In the fast-paced world of embedded vision applications, moving rapidly from concept to deployment is crucial. In this presentation, Dumitriu introduces the Edge App Runtime and SDK—groundbreaking tools designed to streamline and accelerate the development process for edge computing solutions. Leveraging a pre-built app skeleton, the SDK simplifies the development journey, allowing developers to focus on customizing event handlers using popular high-level languages such as JavaScript and Python. This approach not only democratizes edge application development, but also significantly reduces the time to market.
With an integrated local tool that supports development, testing, building and deployment, the transition from a local environment to cloud deployment becomes seamless. Dumitriu explores how the Edge App Runtime and SDK are enabling the creation and deployment of edge applications in a matter of minutes, making edge application development more accessible and efficient than ever before.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/07/the-importance-of-memory-for-breaking-the-edge-ai-performance-bottleneck-a-presentation-from-micron-technology/
Wil Florentino, Senior Marketing Manager for Industrial/IIoT at Micron Technology, presents the “Importance of Memory for Breaking the Edge AI Performance Bottleneck” tutorial at the May 2024 Embedded Vision Summit.
In recent years there’s been tremendous focus on designing next-generation AI chipsets to improve neural network inference performance. As higher performance processors are called upon to execute ever-larger models—from vision transformers to LLMs—memory bandwidth is frequently the key performance bottleneck. With the demands for memory bandwidth and storage capacity varying across applications, it is critical to identify the right memory technologies that match the complexity and performance needs of your application.
In this talk, Florentino explores how to choose the right memory to break the performance bottleneck in edge AI systems. He also highlights recent memory technology developments that are enabling higher memory performance and capacity at the edge.
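A back-of-envelope calculation shows why bandwidth, not compute, often sets the ceiling; all numbers below are assumptions for illustration, not figures from the talk.

```python
# Why bandwidth caps LLM token rate: a weight-bound model must stream
# every parameter from memory for each generated token.
# (All numbers are illustrative assumptions, not from the talk.)
params = 7e9          # 7B-parameter model
bytes_per_param = 1   # INT8 weights
bandwidth = 50e9      # ~50 GB/s, an LPDDR-class memory system

tokens_per_second = bandwidth / (params * bytes_per_param)
print(f"upper bound: {tokens_per_second:.1f} tokens/s")  # ~7.1 tokens/s
```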
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/07/intels-approach-to-operationalizing-ai-in-the-manufacturing-sector-a-presentation-from-intel/
Tara Thimmanaik, AI Systems and Solutions Architect at Intel, presents the “Intel’s Approach to Operationalizing AI in the Manufacturing Sector” tutorial at the May 2024 Embedded Vision Summit.
AI at the edge is powering a revolution in industrial IoT, from real-time processing and analytics that drive greater efficiency and learning to predictive maintenance. Intel is focused on developing tools and assets to help domain experts operationalize AI-based solutions in their fields of expertise.
In this talk, Thimmanaik explains how Intel’s software platforms simplify labor-intensive data upload, labeling, training, model optimization and retraining tasks. She shows how domain experts can quickly build vision models for a wide range of processes—detecting defective parts on a production line, reducing downtime on the factory floor, automating inventory management and other digitization and automation projects. And she introduces Intel-provided edge computing assets that empower faster localized insights and decisions, improving labor productivity through easy-to-use AI tools that democratize AI.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/challenges-and-solutions-of-moving-vision-llms-to-the-edge-a-presentation-from-expedera/
Costas Calamvokis, Distinguished Engineer at Expedera, presents the “Challenges and Solutions of Moving Vision LLMs to the Edge” tutorial at the May 2024 Embedded Vision Summit.
OEMs, brands and cloud providers want to move LLMs to the edge, especially for vision applications. What are the benefits and challenges of doing so? In this talk, Calamvokis explores how edge AI is evolving to encompass massively increasing LLM model sizes, the use cases of local LLMs and the performance, power and chip area considerations that system architects should consider when utilizing vision-based LLMs.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/implementing-transformer-neural-networks-for-visual-perception-on-embedded-devices-a-presentation-from-verisilicon/
Shang-Hung Lin, Vice President of Neural Processing Products at VeriSilicon, presents the “Implementing Transformer Neural Networks for Visual Perception on Embedded Devices” tutorial at the May 2024 Embedded Vision Summit.
Transformers are a class of neural network models originally designed for natural language processing. Transformers are also powerful for visual perception due to their ability to model long-range dependencies and process multimodal data. Resource constraints form a central challenge when deploying transformers on embedded platforms. Transformers demand substantial memory for parameters and intermediate computations. Further, the computations involved in self-attention create challenging computation requirements. Energy efficiency adds another layer of complexity.
Mitigating these challenges requires a multifaceted approach. Optimization techniques like quantization ameliorate memory constraints. Pruning and sparsity techniques, removing less critical connections, alleviate computation demands. Knowledge distillation transfers knowledge from larger models to compact models. Lin also discusses hardware accelerators such as NPUs customized for transformer workloads, and software techniques for efficiently mapping transformer models to hardware accelerators.
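As one concrete instance of the quantization techniques Lin covers, post-training dynamic quantization in PyTorch shrinks linear-layer weights to int8; the model here is a toy stand-in, not a vision transformer.

```python
# Post-training dynamic quantization in PyTorch (toy model as a stand-in
# for a transformer block; weights become int8, activations stay float).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface, roughly 4x smaller weights
```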
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/a-cutting-edge-memory-optimization-method-for-embedded-ai-accelerators-a-presentation-from-7-sensing-software/
Arnaud Collard, Technical Leader for Embedded AI at 7 Sensing Software, presents the “Cutting-edge Memory Optimization Method for Embedded AI Accelerators” tutorial at the May 2024 Embedded Vision Summit.
AI hardware accelerators are playing a growing role in enabling AI in embedded systems such as smart devices. In most cases NPUs need a dedicated, tightly coupled high-speed memory to run efficiently. This memory has a major impact on performance, power consumption and cost. In this presentation, Collard dives deep into his company’s state-of-the-art memory optimization method that significantly decreases the size of the required NPU memory. This method utilizes processing by stripes and processing by channels to obtain the best compromise between memory footprint reduction and additional processing cost.
Through this method, the original neural network is split into several pieces that are scheduled on the NPU. Collard shares results that show this technique yields large memory footprint reductions with moderate increases in processing time. He also presents his company’s proprietary ONNX-based tool that automatically finds the optimal network configuration and schedules the subnetworks for execution.
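The memory saving can be pictured with a toy stripe-wise filter: each stripe is processed with a small halo of overlap so only one stripe is resident at a time. This illustrates the principle only, not 7 Sensing Software's method.

```python
# Toy stripe-wise filtering with halo overlap: only one stripe is resident
# at a time, trading a little edge recomputation for a far smaller buffer.
# (Illustrates the principle only; not 7 Sensing Software's method.)
import numpy as np
from scipy.ndimage import uniform_filter

img = np.random.rand(512, 512).astype(np.float32)
k = 5                                   # filter size -> halo of k//2 rows
halo, stripes = k // 2, 8
rows = img.shape[0] // stripes

out = np.empty_like(img)
for s in range(stripes):
    top, bot = s * rows, (s + 1) * rows
    lo, hi = max(0, top - halo), min(img.shape[0], bot + halo)
    chunk = uniform_filter(img[lo:hi], size=k)     # process one stripe
    out[top:bot] = chunk[top - lo : top - lo + rows]

print(np.allclose(out, uniform_filter(img, size=k)))  # True: results match
```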
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/efficiency-unleashed-the-next-gen-nxp-i-mx-95-applications-processor-for-embedded-vision-a-presentation-from-nxp-semiconductors/
James Prior, Senior Product Manager at NXP Semiconductors, presents the “Efficiency Unleashed: The Next-gen NXP i.MX 95 Applications Processor for Embedded Vision” tutorial at the May 2024 Embedded Vision Summit.
Machine vision is the most obvious way to help humans live better, enabling hundreds of applications spanning security, monitoring, inspection and more. Modern edge processors need private on-device and scalable hybrid machine learning capabilities to offer enough longevity to stay relevant in industrial and commercial IoT markets. In this talk, Prior presents the upcoming i.MX 95 family of applications processors.
The i.MX 95 features a new, self-developed neural processing unit from NXP—the eIQ Neutron NPU. Designed to scale from today’s conventional neural networks to tomorrow’s transformer-based models, the eIQ Neutron NPU’s scalable architecture delivers edge AI capabilities at high efficiency with award-winning tools, combined with chip-level security and privacy features. The i.MX 95 applications processor family features powerful processing and vision capabilities combined with safety, security and expandable high-speed interfaces.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/optimized-vision-language-models-for-intelligent-transportation-system-applications-a-presentation-from-nota-ai/
Tae-Ho Kim, Co-founder and CTO of Nota AI, presents the “Optimized Vision Language Models for Intelligent Transportation System Applications” tutorial at the May 2024 Embedded Vision Summit.
In the rapidly evolving landscape of intelligent transportation systems (ITSs), the demand for efficient and reliable solutions has never been greater. In this presentation, Kim shows how an innovative approach—optimized vision language models—can dramatically enhance the accuracy and robustness of computer vision solutions for ITSs.
Kim also illustrates how optimized vision language models can be implemented in real time at the edge, enabling intelligent decision-making for applications such as traffic management, vehicle recognition and pedestrian safety. Finally, he explains how Nota AI is utilizing optimized vision language models to revolutionize numerous ITS applications, leading to safer, more efficient and environmentally friendly transportation systems.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/image-signal-processing-optimization-for-object-detection-a-presentation-from-nextchip/
Young-Jun Yoo, Executive Vice President at Nextchip, presents the “Image Signal Processing Optimization for Object Detection” tutorial at the May 2024 Embedded Vision Summit.
This talk delves into the challenges and optimization strategies in image signal processing (ISP) for enhancing object detection in advanced driver-assistance systems (ADAS). Through real-world examples, Yoo explores the critical role of image tuning in addressing corner cases and improving detection accuracy.
The presentation covers ADAS sensing methodologies and verification processes, emphasizing the importance of image quality in sensor data. Yoo provides practical insights into image tuning techniques, including day-night transition handling and environment-based adjustments. He shares valuable knowledge on optimizing ISP to ensure robust object detection, enhancing safety and performance in automotive applications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/squeezing-the-last-milliwatt-and-cubic-millimeter-from-smart-cameras-using-the-latest-fpgas-and-drams-a-presentation-from-lattice-semiconductor-and-etron-technology-america/
Hussein Osman, Segment Marketing Director at Lattice Semiconductor, and Richard Crisp, Vice President and Chief Scientist at Etron Technology America, co-present the “Squeezing the Last Milliwatt and Cubic Millimeter from Smart Cameras Using the Latest FPGAs and DRAMs” tutorial at the May 2024 Embedded Vision Summit.
Attaining the lowest power, size and cost for a smart camera requires carefully matching the hardware to the actual application requirements. General-purpose media processors may appear attractive and easy to use, but often include unneeded features which increase system size, weight, power and cost. “Right-sizing” the camera design for the application requirements can save significant power, cost, size and weight.
In this talk, Osman and Crisp show how you can leverage an advanced power-optimized FPGA incorporating a soft RISC-V core combined with a video-bandwidth, low-pin-count DRAM to cut power consumption roughly in half for endpoint smart cameras used in automotive, industrial and other applications. They examine techniques for reducing power, cost and size including system architecture, memory architecture, packaging, and signaling and termination schemes. They also explore techniques for enhancing system reliability.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/maximize-your-ai-compatibility-with-flexible-pre-and-post-processing-a-presentation-from-flex-logix/
Jayson Bethurem, Vice President of Marketing and Business Development at Flex Logix, presents the “Maximize Your AI Compatibility with Flexible Pre- and Post-processing” tutorial at the May 2024 Embedded Vision Summit.
At a time when IC fabrication costs are skyrocketing and applications have increased in complexity, it is important to minimize design risks and maximize flexibility. In this presentation, you’ll learn how embedding FPGA technology can solve these problems—expanding your market access by enabling more external interfaces, accelerating your compute envelope and increasing data security.
Embedded FPGA IP is highly efficient for pre- and post-processing data and can implement a variety of signal processing tasks such as image signal processing (defective pixel and color correction, for example), packet processing from network interfaces and signal processing from data converters (filtering). Additionally, this IP can manage data movement in and out of your AI engine as well as provide an adaptable protocol layer to connect to a variety of external interfaces, like USB and MIPI cameras. Flex Logix eFPGA IP is easy to integrate, high performing, lightweight and supported across more process nodes than any other supplier’s.
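As a behavioral picture of one pre-processing task named above, the sketch below performs defective-pixel correction in Python; an eFPGA would implement the equivalent logic in fabric, and all values here are illustrative.

```python
# Behavioral sketch of defective-pixel correction (an eFPGA would realize
# the equivalent pipeline in fabric; thresholds and data are illustrative).
import numpy as np
from scipy.ndimage import median_filter

# Synthetic 10-bit raw frame with mild noise and one stuck-high pixel.
frame = (512 + np.random.randint(-8, 8, (480, 640))).astype(np.uint16)
frame[100, 200] = 1023

# Flag pixels that deviate sharply from their 3x3 neighborhood median,
# then replace only those pixels with the median value.
med = median_filter(frame, size=3)
defective = np.abs(frame.astype(np.int32) - med.astype(np.int32)) > 200
corrected = np.where(defective, med, frame)
print(defective.sum(), corrected[100, 200])  # 1 defect found and repaired
```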
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/addressing-tomorrows-sensor-fusion-and-processing-needs-with-cadences-newest-processors-a-presentation-from-cadence/
Amol Borkar, Product Marketing Director at Cadence, presents the “Addressing Tomorrow’s Sensor Fusion and Processing Needs with Cadence’s Newest Processors” tutorial at the May 2024 Embedded Vision Summit.
From ADAS to autonomous vehicles to smartphones, the number and variety of sensors used in edge devices is increasing: radar, LiDAR, time-of-flight sensors and multiple cameras are more and more common. And, as sensors have improved, the data rates associated with them have also increased. Traditionally, a dedicated processor has been utilized to process data from each sensor independently. Today, however, there is a growing need for a single, unified processor capable of processing multimodal sensor data utilizing both classical and AI algorithms and implementing sensor fusion for robust perception.
In this talk, Borkar introduces the new Vision 341 DSP and Vision 331 DSP from Cadence. These cores provide a versatile single-DSP solution for various workloads, including image sensing, radar, LiDAR and AI tasks. He explores the architecture of these new processors, highlights their performance and efficiency and outlines the associated developer tools and software building blocks.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip, presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, are a novel and highly efficient class of state-space networks. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/silicon-slip-ups-the-ten-most-common-errors-processor-suppliers-make-number-four-will-amaze-you-a-presentation-from-bdti/
Phil Lapsley, Co-founder and Vice President of BDTI, presents the “Silicon Slip-ups: The Ten Most Common Errors Processor Suppliers Make (Number Four Will Amaze You!)” tutorial at the May 2024 Embedded Vision Summit.
For over 30 years, BDTI has provided engineering, evaluation and advisory services to processor suppliers and companies that use processors in products. The company has seen a lot, including some classic mistakes. (You know, things like: the chip has an accelerator, but no easy way to program it… or you can only program it using an obscure proprietary framework. Or it has an ISP that only works with one image sensor. Or the development tools promise a lot but fall far short. Or the device drivers don’t work. Or the documentation is deficient.)
Phil Lapsley, BDTI co-founder, presents a fun and fast-paced review of some of the most common processor provider errors, ones seen repeatedly at BDTI. If you’re a processor provider, you’ll learn things you can do to avoid these goofs—and if you’re a processor user, you’ll learn about things to watch for when selecting your next processor!
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-arms-machine-learning-solution-enables-vision-transformers-at-the-edge-a-presentation-from-arm/
Stephen Su, Senior Segment Marketing Manager at Arm, presents the “How Arm’s Machine Learning Solution Enables Vision Transformers at the Edge” tutorial at the May 2024 Embedded Vision Summit.
AI at the edge has been transforming over the last few years, with newer use cases running more efficiently and securely. Most edge AI workloads were initially run on CPUs, but machine learning accelerators have gradually been integrated into SoCs, providing more efficient solutions. At the same time, ChatGPT has driven a sudden surge in interest in transformer-based models, which are primarily deployed using cloud resources. Soon, many transformer-based models will be modified to run effectively on edge devices.
In this presentation, Su explains the role of transformer-based models in vision applications and the challenges of implementing transformer models at the edge. Next, he introduces the latest Arm machine learning solution and how it enables the deployment of transformer-based vision networks at the edge. Finally, he shares an example implementation of a transformer-based embedded vision use case and uses this to contrast such solutions with those based on traditional CNN networks.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/nx-evos-a-new-enterprise-operating-system-for-video-and-visual-ai-a-presentation-from-network-optix/
Nathan Wheeler, Co-founder and CEO of Network Optix, presents the “Nx EVOS: A New Enterprise Operating System for Video and Visual AI” tutorial at the May 2024 Embedded Vision Summit.
In most software domains, developers don’t write code at the bare-metal level; they build applications on top of operating systems, which provide commonly needed functionality. Yet, today, developers of video and AI applications are effectively writing their applications at the bare-metal level, building the “plumbing” themselves to handle basics like device discovery, storage management, security and model deployment. These developers need an operating system that supports their applications so they can focus on what really matters: the core functionality of their product.
Nx EVOS is the world’s first enterprise video operating system. EVOS revolutionizes video management, offering device discovery, bandwidth optimization and security features—in cloud and on device. Its support for AI pipelines and user management enables scalable deployment of AI applications across environments and platforms, and it’s trusted by leading organizations such as SpaceX. In this presentation, you’ll learn how Nx EVOS can save you time and effort in building your next vision product.
Best Practices for Effectively Running dbt in Airflow (Tatiana Al-Chueyr)
As a popular open-source library for analytics engineering, dbt is often used in combination with Airflow. Orchestrating and executing dbt models as DAGs adds a layer of control over tasks, improves observability, and provides a reliable, scalable environment in which to run dbt models.
This webinar will cover a step-by-step guide to Cosmos, an open source package from Astronomer that helps you easily run your dbt Core projects as Airflow DAGs and Task Groups, all with just a few lines of code (a minimal sketch follows below). We’ll walk through:
- Standard ways of running dbt (and when to utilize other methods)
- How Cosmos can be used to run and visualize your dbt projects in Airflow
- Common challenges and how to address them, including performance, dependency conflicts, and more
- How running dbt projects in Airflow helps with cost optimization
Webinar given on 9 July 2024
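As a sketch of those “few lines of code” (paths, profile details and schedule are placeholders, not specifics from the webinar), a dbt project rendered as an Airflow DAG with Cosmos might look like this:

```python
# Minimal Cosmos sketch: render a dbt Core project as an Airflow DAG.
# (Paths, profile details and schedule are placeholders.)
from datetime import datetime

from cosmos import DbtDag, ProfileConfig, ProjectConfig

dbt_dag = DbtDag(
    dag_id="jaffle_shop_dbt",
    schedule="@daily",
    start_date=datetime(2024, 7, 1),
    project_config=ProjectConfig("/usr/local/airflow/dbt/jaffle_shop"),
    profile_config=ProfileConfig(
        profile_name="jaffle_shop",
        target_name="dev",
        profiles_yml_filepath="/usr/local/airflow/dbt/profiles.yml",
    ),
)
```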
Vulnerability Management: A Comprehensive Overview (Steven Carlson)
This talk will break down a modern approach to vulnerability management. The main focus is finding the root causes of software risk that may expose your organization to reputation damage. The presentation is broken down into three main areas: potential risk, occurrence, and exploitable risk. Each segment will help professionals understand why vulnerability management programs are so important.
BLOCKCHAIN TECHNOLOGY - Advantages and Disadvantages (SAI KAILASH R)
Explore the advantages and disadvantages of blockchain technology in this comprehensive SlideShare presentation. Blockchain, the backbone of cryptocurrencies like Bitcoin, is revolutionizing various industries by offering enhanced security, transparency, and efficiency. However, it also comes with challenges such as scalability issues and energy consumption. This presentation provides an in-depth analysis of the key benefits and drawbacks of blockchain, helping you understand its potential impact on the future of technology and business.
Integrating Kafka with MuleSoft 4 and Use Cases (shyamraj55)
In these slides, the speaker shares their experience in the IT industry, focusing on the integration of Apache Kafka with MuleSoft. They start by providing an overview of Kafka, detailing its pub-sub model, its ability to handle large volumes of data, and its role in real-time data pipelines and analytics. The speaker then explains Kafka's architecture, covering topics such as partitions, producers, consumers, brokers, and replication.
The discussion moves on to Kafka connector operations within MuleSoft, including publish, consume, commit, and seek, which are demonstrated in a practical demo. The speaker also emphasizes important design considerations like connector configuration, flow design, topic management, consumer group management, offset management, and logging. The session wraps up with a Q&A segment where various Kafka-related queries are addressed.
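The connector operations listed (publish, consume, commit, seek) map directly onto Kafka client primitives. The sketch below shows them with the confluent-kafka Python client as a neutral stand-in for the MuleSoft connector; broker and topic names are placeholders.

```python
# Publish, consume, commit and seek with the confluent-kafka Python client
# (a neutral stand-in for the MuleSoft connector; names are placeholders).
from confluent_kafka import Consumer, Producer, TopicPartition

producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce("orders", value=b'{"id": 1}')         # publish
producer.flush()

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "demo",
    "enable.auto.commit": False,                       # commit manually below
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])
msg = consumer.poll(5.0)                               # consume
if msg is not None and not msg.error():
    consumer.commit(msg)                               # commit this offset
    consumer.seek(TopicPartition("orders", 0, 0))      # seek: replay partition 0
consumer.close()
```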
How Social Media Hackers Help You to See Your Wife's Message (HackersList)
In the modern digital era, social media platforms have become integral to our daily lives. These platforms, including Facebook, Instagram, WhatsApp, and Snapchat, offer countless ways to connect, share, and communicate.
Data Integration Basics: Merging & Joining Data (Safe Software)
Are you tired of dealing with data trapped in silos? Join our upcoming webinar to learn how to efficiently merge and join disparate datasets, transforming your data integration capabilities. This webinar is designed to empower you with the knowledge and skills needed to efficiently integrate data from various sources, allowing you to draw more value from your data.
With FME, merging and joining different types of data—whether it’s spreadsheets, databases, or spatial data—becomes a straightforward process. Our expert presenters will guide you through the essential techniques and best practices.
In this webinar, you will learn:
- Which transformers work best for your specific data types.
- How to merge attributes from multiple datasets into a single output.
- Techniques to automate these processes for greater efficiency.
Don’t miss out on this opportunity to enhance your data integration skills. By the end of this webinar, you’ll have the confidence to break down data silos and integrate your data seamlessly, boosting your productivity and the value of your data.
Uncharted Together - Navigating AI's New Frontiers in Libraries (Brian Pichman)
Journey into the heart of innovation where the collaborative spirit between information professionals, technologists, and researchers illuminates the path forward through AI's uncharted territories. This opening keynote celebrates the unique potential of special libraries to spearhead AI-driven transformations. Join Brian Pichman as we saddle up to ride into the history of Artificial Intelligence, how it's evolved over the years, and how it's transforming today's frontiers. We will explore a variety of tools and strategies that leverage AI, including some new ideas that may enhance cataloging, unlock personalized user experiences, or pioneer new ways to access specialized research. As with any frontier exploration, we will confront shared ethical challenges and explore how joint efforts can not only navigate but also shape AI's impact on equitable access and information integrity in special libraries. For the remainder of the conference, we will equip you with a "digital compass" where you can submit ideas and thoughts on what you've learned in sessions for a final reveal in the closing keynote.
EuroPython 2024 - Streamlining Testing in a Large Python Codebase (Jimmy Lai)
Maintaining code quality through effective testing becomes increasingly challenging as codebases expand and developer teams grow. In our rapidly expanding codebase, we encountered common obstacles such as increasing test suite execution time, slow test coverage reporting and delayed test startup. By leveraging innovative strategies using open-source tools, we achieved remarkable enhancements in testing efficiency and code quality.
As a result, in the past year, our test case volume increased by 8,000, test coverage was elevated to 85%, and Continuous Integration (CI) test duration was kept under 15 minutes.
Types of Weaving Loom Machines and Their Technology (ldtexsolbl)
Welcome to the presentation on the types of weaving loom machines, brought to you by LD Texsol, a leading manufacturer of electronic Jacquard machines. Weaving looms are pivotal in textile production, enabling the interlacing of warp and weft threads to create diverse fabrics. Our exploration begins with traditional handlooms, which have been in use since ancient times, preserving artisanal craftsmanship. We then move to frame and pit looms, simple yet effective tools for small-scale and traditional weaving.
Advancing to modern industrial applications, we discuss power looms, the backbone of high-speed textile manufacturing. These looms, integral to LD Texsol's product range, offer unmatched productivity and consistent quality, essential for large-scale apparel, home textiles, and technical fabrics. Rapier looms, another modern marvel, use rapier rods for versatile and rapid weaving of complex patterns.
Next, we explore air and water jet looms, known for their efficiency in lightweight fabric production. LD Texsol's state-of-the-art electronic Jacquard machines exemplify technological advancements, enabling intricate designs and patterns with precision control. Lastly, we examine dobby looms, ideal for medium-complexity patterns and versatile fabric production.
This presentation will deepen your understanding of weaving looms, their applications, and the innovations LD Texsol brings to the textile industry. Join us as we weave through the history, technology, and future of textile production.
In Deloitte's latest article, discover the impact of India's three new criminal laws, effective July 1, 2024. These laws, replacing the IPC, CrPC, and Indian Evidence Act, promise a more contemporary, concise, and accessible legal framework, enhancing forensic investigations and aligning with current societal needs.
Learn how these three new criminal laws will shape the future of criminal justice in India.
Read more in Deloitte India's latest article on the three new criminal laws: https://www2.deloitte.com/in/en/pages/finance/articles/three-new-criminal-laws-in-India.html
TrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-In (TrustArc)
Six months into 2024, and it is clear the privacy ecosystem takes no days off!! Regulators continue to implement and enforce new regulations, businesses strive to meet requirements, and technology advances like AI have privacy professionals scratching their heads about managing risk.
What can we learn about the first six months of data privacy trends and events in 2024? How should this inform your privacy program management for the rest of the year?
Join TrustArc, Goodwin, and Snyk privacy experts as they discuss the changes we’ve seen in the first half of 2024 and gain insight into the concrete, actionable steps you can take to up-level your privacy program in the second half of the year.
This webinar will review:
- Key changes to privacy regulations in 2024
- Key themes in privacy and data governance in 2024
- How to maximize your privacy program in the second half of 2024
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
AI: The New Age - Solving the World’s Toughest Challenges, Together. Opt in for early access when registration opens!
Calling all developers and technologists! From front-end, web and app devs to back-end, full-stack, database and DevOps to data scientists, researchers, and more: learn, collaborate, and solve at Intel Innovation - an event for developers, by developers. www.intel.com/innovation
Save the date: September 24-25, 2024, San Jose Convention Center, CA.
- Hear from leading industry luminaries, technologists, and start-up entrepreneurs in the field of AI.
- Learn the breadth of future technology advancements in AI through keynotes, sessions, birds-of-a-feather gatherings, and hands-on labs.
- Get the latest AI development tools, gain hands-on experience, and join on-site hackathons to optimize your AI code and workflows.
- Share unique ideas and perspectives and collaborate with your peers.
Join us at CVPR!
- Hackster OpenVINO Challenge (ends June 1st): https://www.hackster.io/contests/openvino2024/
- OpenVINO at CVPR (tutorial date: June 17th): https://paularamo.github.io/cvpr-2024/