SMART is an active member of multiple industry-leading standards organizations and consortia. SMART provides design expertise, prototyping, and ecosystem development support for cutting-edge memory, storage, and accelerator technologies. Learn More
The M1HC drive is also built to operate at altitudes up to 30,000 m (approximately 100,000 ft) and has a comprehensive set of features. Engineers designing for high-capacity, highly secure applications should consider the M1HC. Learn More.
The M2 drive employs pFail backup technology to protect data in the event of accidental power failure. SMART's M2 SSDs are designed to meet the needs of cost-sensitive applications where temperature tolerances are important. Learn More.
Utilizing a SATA 6Gb/s interface, the M1 drive delivers 525MB/s sequential read and 500MB/s sequential write speeds. Capacities of 240GB, 480GB, 960GB, and 1920GB are available using multi-level cell (MLC) NAND Flash. Learn More.
With DuraFlash™, SMART is committed to offering a wide range of Flash storage form factors designed and manufactured to meet the heavy demands of accelerating embedded applications in the telecom, networking, storage, industrial control, medical, IIoT, transportation, and video surveillance market segments.
SMART offers commercial and industrial temp embedded flash-based products, including M.2 SATA and M.2 PCIe NVMe, SATA DOM, mSATA, slim SATA, and eUSB form factors equipped with SATA II, SATA III, PCIe and USB interfaces.
SMART designs and manufactures commercial and industrial temp removable flash memory products in a wide variety of form factors including CF Card, SD Card, microSD, 2.5" SATA SSD, U.2 PCIe NVMe and USB Flash Drives.
CCIX (Cache Coherent Interconnect for Accelerators) is an interconnect protocol that enables low-latency access to memory and devices attached across the PCIe bus. CCIX implementations are available with system support from hardware manufacturers and a stable software ecosystem.
The CCIX® standard allows processors based on different instruction set architectures to extend the benefits of cache coherent, peer processing to a number of acceleration devices including FPGAs, GPUs, network/storage adapters, intelligent networks and custom ASICs.
CCIX simplifies development and adoption by extending well-established data center hardware and software infrastructure. This ultimately allows system designers to seamlessly integrate the right combination of heterogeneous components to address their specific system needs.
CCIX implementations are available from silicon manufacturers, with infrastructure supported by IP providers and a stable software ecosystem
Low-latency access to data, whether it resides in host memory or accelerator memory
Enables direct addressing of memory buffers that reside on accelerator hardware, thereby eliminating PCIe I/O address remapping and BAR configuration
Leverages existing PCIe PHY and data link infrastructure
An additional transfer rate of 25Gbps beyond the PCIe Gen4 specification is available
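As a conceptual illustration of the points above, the toy Python sketch below contrasts a coherent, in-place access model with the conventional copy-in/copy-out model. All names and structures here are hypothetical stand-ins; real CCIX coherency is implemented in hardware, not software.

```python
# Toy model contrasting coherent, in-place access (CCIX-style) with an
# explicit copy-in/copy-out model (conventional PCIe DMA). All names
# here are illustrative; real implementations operate in hardware.

class HostMemory:
    """A buffer owned by the host; both models operate on the same data."""
    def __init__(self, data):
        self.data = list(data)

def dma_model(host, transform):
    # Non-coherent path: copy data into a device-local buffer, compute,
    # then copy results back to host memory.
    device_local = list(host.data)          # copy in over PCIe
    results = [transform(x) for x in device_local]
    host.data = results                     # copy out over PCIe
    return host.data

def coherent_model(host, transform):
    # Coherent path: the accelerator loads/stores host memory directly;
    # no staging copies, with caches kept consistent by the interconnect.
    for i, x in enumerate(host.data):
        host.data[i] = transform(x)         # in-place update
    return host.data

buf = HostMemory([1, 2, 3, 4])
print(coherent_model(buf, lambda x: x * 2))   # [2, 4, 6, 8]
```

Both paths produce the same result; the difference the sketch highlights is that the coherent path avoids the two staging copies across the bus.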
Use Case #1:
Host Memory Expansion over PCIe Bus with Computational Storage Services
Data-path acceleration for offloading security, authentication and authorization, such as:
In-line data compression and correction with custom implementations
In-line encryption and decryption using user modifiable keys
Key management functions for authorization and session management
Offloading parts of video applications or NLP (Natural Language Processing) applications to a compute engine (FPGA) inside the device:
Vector atomics to help improve CPU load/store efficiency for DSP applications such as MAX(), MIN(), SORT(), and MEAN().
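A hedged sketch of the reduction primitives named above, written as plain-Python stand-ins: in an offloaded design, the CPU would issue a single command and receive the scalar result, instead of streaming every element through its own caches. Function names here are illustrative, not a real API.

```python
# Illustrative software stand-ins for vector reduction primitives
# (MAX, MIN, SORT, MEAN) that a CCIX-attached compute engine could
# execute directly against host memory. Names are hypothetical.

def vec_max(v):  return max(v)
def vec_min(v):  return min(v)
def vec_sort(v): return sorted(v)
def vec_mean(v): return sum(v) / len(v)

samples = [7, 2, 9, 4]
print(vec_max(samples), vec_min(samples), vec_mean(samples))
# 9 2 5.5 -- with offload, only these scalars cross back to the CPU
```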
Use Case #2:
Enabling Low-Latency Memory Access from Accelerator to Host Memory
SmartNIC and network packet filtering, where large flow tables reside in host memory and the SmartNIC applies these rules for packet filtering
FinTech applications, where specialized compute engines such as FPGAs and GPUs on the accelerator card operate directly on large data present in host memory
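The SmartNIC case above can be sketched as a simple lookup: the flow table lives in host memory, and the NIC matches each packet's header fields against it in place rather than maintaining a synchronized private copy. The table layout, field names, and actions below are all hypothetical.

```python
# Hypothetical sketch of SmartNIC packet filtering against a flow
# table resident in host memory. With coherent access, the NIC can
# look up rules in place instead of keeping a mirrored local copy.

flow_table = {
    # (src_ip, dst_ip, dst_port) -> action; stored in host memory
    ("10.0.0.1", "10.0.0.9", 443): "allow",
    ("10.0.0.5", "10.0.0.9", 22):  "drop",
}

def filter_packet(pkt, table, default="drop"):
    """NIC-side lookup: match the packet's 3-tuple against host rules."""
    key = (pkt["src"], pkt["dst"], pkt["dport"])
    return table.get(key, default)

pkt = {"src": "10.0.0.1", "dst": "10.0.0.9", "dport": 443}
print(filter_packet(pkt, flow_table))  # allow
```

A default-drop policy is used here so that unmatched flows fail closed, a common convention in packet filters.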
Use Case #3:
Fine Grain Data Sharing between Host Applications and Accelerator Function
Data analytics and video processing, where an algorithm is split into multiple stages spawned on different hardware, such as the host CPU and GPU- or FPGA-based accelerators. Results from each computational stage can be pipelined by snooping memory across the fabric using CCIX hardware.
In-memory databases or caching applications, where relational data resides in host memory and accelerator engines offload the CPU by running queries without CPU intervention.
Graph search algorithms, where information collected from various IoT devices resides in large host memory, and a GPU- or FPGA-based accelerator processes the data piece-wise without copying it into its local buffers.
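The staged-pipeline pattern in these use cases can be sketched as follows: each stage consumes the previous stage's output, and with coherent shared memory the hand-off can happen in place rather than by copying buffers between devices. The stage names and data below are invented for illustration.

```python
# Toy pipeline: an algorithm split into stages that could be spawned
# on different hardware (host CPU, GPU, FPGA). Each stage consumes the
# previous stage's output; with CCIX, the hand-off could occur via
# shared coherent memory instead of device-to-device copies.

def stage_decode(frames):        # e.g., CPU: normalize raw input
    return [f.lower() for f in frames]

def stage_detect(frames):        # e.g., FPGA: scan for a pattern
    return [f for f in frames if "event" in f]

def stage_summarize(frames):     # e.g., GPU/CPU: reduce to a count
    return {"events": len(frames)}

pipeline = [stage_decode, stage_detect, stage_summarize]

data = ["Event A", "noise", "EVENT B"]
for stage in pipeline:
    data = stage(data)           # hand-off via shared memory
print(data)  # {'events': 2}
```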
For more information on CCIX-related products, click here.