#gpucomputing

apfeltalk :verified:
NVIDIA introduces DGX Spark and DGX Station: AI supercomputers for the desktop. At GTC 2025, NVIDIA unveiled two new AI supercomputers that bring data-center performance to the desktop for the first time.
https://www.apfeltalk.de/magazin/news/nvidia-stellt-dgx-spark-und-dgx-station-vor-ki-supercomputer-fuer-den-schreibtisch/
#KI #News #DataScience #DGXSpark #DGXStation #GPUComputing #GraceBlackwell #HighPerformanceComputing #KIEntwicklung #KISupercomputer #MachineLearning #NVIDIADGX

Leshem Choshen
And compression is now super fast!
💻 Performance on Mac M1:
✅ Compression: 7 GB/s
✅ Decompression: 8 GB/s
Wait till multithreading happens on the GPU and you only decompress on demand.
#compression #llms #GPUComputing #ai
Paper: alphaxiv.org/abs/2411.05239

MWibral
In the long run it seems we will have to replace #opencl in our scientific software, which uses pyopencl for #GPUcomputing on all vendors' cards. Which way should we go? #SYCL? We want #FOSS, vendor neutrality, longevity of the software, and an easy way to use it from Python (ah, and performance, of course).

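For context, a minimal illustrative sketch of the pyopencl style the post describes: a runtime-compiled OpenCL kernel driven from Python that runs on whatever vendor's device is available. This is a generic example written under those assumptions, not code from the software in question; the kernel and variable names are made up.

import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array

# Pick whatever OpenCL device is available (any vendor, CPU or GPU).
ctx = cl.create_some_context(interactive=False)
queue = cl.CommandQueue(ctx)

# Host data.
a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

# Copy inputs to the device and allocate the output there.
a_dev = cl_array.to_device(queue, a)
b_dev = cl_array.to_device(queue, b)
out_dev = cl_array.empty_like(a_dev)

# Trivial element-wise kernel, compiled at runtime for the chosen device.
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

# Launch one work-item per element, then copy the result back and check it.
program.add(queue, a.shape, None, a_dev.data, b_dev.data, out_dev.data)
assert np.allclose(out_dev.get(), a + b)

Whatever replaces it (SYCL or otherwise) would need to cover the same pattern: device selection, host/device transfers, and a kernel language that stays portable across vendors while remaining callable from Python.
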
Canonical Ubuntu
Going to Nvidia GTC?
Visit us at booth 1422 to talk about how we support AI/ML from desktop to cloud to edge.
Then join us for drinks and tacos at Continental Bar on March 20 from 7 pm to 10 pm.
#GTC19 #GPUComputing #kubeflow
http://bit.ly/2XZDpqr

heise online (inoffiziell)
Nvidia is using the Computer Vision and Pattern Recognition Conference to release several machine-learning projects.
https://www.heise.de/developer/meldung/Nvidia-veroeffentlicht-Code-fuer-beschleunigtes-maschinelles-Lernen-4086829.html
#GPUComputing #MachineLearning #Nvidia