devwiki:nvidia

    * in addition, the digital twin updates along with the main physical copy: anything that happens to the physical one is mirrored in the digital twin.
  * Nvidia Omniverse is the tool to simulate the real world and build digital twins.
    * tutorials: https://docs.omniverse.nvidia.com/plat_omniverse/common/video-list.html
    * it bridges real-time collaboration between different users and different graphics software.
    * Omniverse Audio2Face: generates facial animation from audio.
    * Nvidia OVX servers provide the hardware to build large-scale digital twins.
    * the Omniverse system spans: digital twins + robotics; design + content creation; integration; rendering; sensors; asset library
      * AI: DRIVE, Isaac (for moving and manipulating things), Metropolis (infrastructure automation), Holoscan (robotic medical devices)
      * Replicator: generates synthetic data for training and testing AI models
      * OmniGraph, behavior, animation: run data-center-scale 3D applications
      * Avatar (work in progress): build digital humans
      * Nvidia's open-source Material Definition Language (MDL): https://developer.nvidia.com/rendering-technologies/mdl-sdk
        * tutorial: https://www.nvidia.com/en-us/on-demand/session/gtcspring22-se2310/?playlistId=playList-5168fd54-82d2-4179-a612-491b68322489
        * tutorial: https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41207/?playlistId=playList-5168fd54-82d2-4179-a612-491b68322489

====== Nvidia MDL ======
  * defines physically based materials
  * stores the specification for material exchange
  * render-algorithm agnostic
  * designed for high performance on GPUs

====== Nvidia CUDA programming ======

  * video: How CUDA Programming Works
    * https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41487/?playlistId=playList-87118008-d10b-42f9-8c57-a50bbf006662
    * CUDA programming is designed around how the GPU hardware works, and the GPU hardware is in turn designed around how it is typically programmed, so both evolve toward best performance.
    * the CUDA programming model:
      * each thread has its own thread ID, which determines which piece of the data it works on; all the threads together cover the whole data set in parallel (see the sketch below this list).
      * optimizing how code uses memory can be important: better data arrangement and swapping data through the fixed-size on-chip memory lets more work fit in it.
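
A minimal CUDA sketch of both points (my own illustration, not code from the talk; the kernel names and the 256-thread block size are arbitrary): ''vectorAdd'' shows how a thread's ID selects the element it works on, and ''blockSum'' shows staging data in fast on-chip shared memory so partial results avoid extra trips through global memory.

<code cpp>
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one output element: its global index is built from
// the block index, the block size, and its thread index within the block.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // unique per-thread ID
    if (i < n)                                       // guard the partial last block
        c[i] = a[i] + b[i];
}

// Memory-use example: each block sums its slice of the input in fast
// on-chip shared memory, so partial sums never round-trip through slow
// global memory; only one result per block is written back.
__global__ void blockSum(const float *in, float *blockSums, int n) {
    __shared__ float tile[256];                      // one tile per block, on chip
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;      // stage global -> shared
    __syncthreads();                                 // whole block waits for the tile
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0) blockSums[blockIdx.x] = tile[0];
}

int main() {
    const int n = 1 << 20;
    const int threads = 256;                         // threads per block (power of 2)
    const int blocks  = (n + threads - 1) / threads; // enough blocks to cover n

    float *a, *b, *c, *sums;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    cudaMallocManaged(&sums, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    blockSum<<<blocks, threads>>>(c, sums, n);
    cudaDeviceSynchronize();
    printf("c[0] = %.1f, first block sum = %.1f\n", c[0], sums[0]);  // 3.0, 768.0

    cudaFree(a); cudaFree(b); cudaFree(c); cudaFree(sums);
    return 0;
}
</code>

Compiled with ''nvcc'' (e.g. ''nvcc sketch.cu -o sketch''); the ''<<<blocks, threads>>>'' launch configuration maps the 1D grid of threads onto the data.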

====== CV-CUDA ======

  * CV-CUDA: computer vision with CUDA.

====== Nvidia RTX stack ======

  * 1st gen: VkRay, DXR, DLSS 1
  * 2nd gen:
    * real-time denoising: spatial denoising
    * caustics
    * RTXDI: ray-traced direct illumination, casting shadows from all lights and emissive surfaces
    * RTXGI: real-time multi-bounce indirect lighting
    * Reflex
    * DLSS 2: deep learning super sampling, AI-generated pixels
  * 3rd gen:
    * displaced micro-meshes
    * 2D SGM optical flow, shader execution reordering, real-time path tracing, opacity micro-maps
    * DLSS 3: deep learning super sampling, AI frame generation

====== Nvidia GPU architecture ======

Approximate peak throughput per core type across recent GPU generations (shader/RT/tensor in TFLOPS, optical flow accelerator (OFA) in TOPS); a query sketch for the local GPU follows the table:

^ core   ^ Turing ^ Ampere ^ Ada  ^
^ shader | 16     | 40     | 90   |
^ RT     | 49     | 78     | 200  |
^ tensor | 130    | 320    | 1400 |
^ OFA    |        | 126    | 300  |
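
A hedged sketch of how to check which architecture and resources the installed GPU actually has, using the CUDA runtime API (''cudaGetDeviceProperties''; the struct fields are standard CUDA runtime fields, the print labels are just illustrative):

<code cpp>
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);                  // number of visible GPUs
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, d);          // fill in this device's properties
        // Compute capability identifies the architecture generation,
        // e.g. 7.5 = Turing, 8.6 = Ampere (GeForce), 8.9 = Ada.
        printf("GPU %d: %s (compute capability %d.%d)\n", d, p.name, p.major, p.minor);
        printf("  SMs: %d, shared memory per block: %zu KB, global memory: %zu MB\n",
               p.multiProcessorCount,
               p.sharedMemPerBlock / 1024,
               p.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
</code>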

====== Nvidia for AI ======

  * large language models: enable a single model to handle many different tasks with context-aware output, e.g. text-related and image-related tasks.
  * NeMo LLM service and its prompt learning framework: prompt-learn on a pre-trained LLM for a specific task.
  * recommender systems: e.g. in shopping and social networks.