Arjun Dube

Principal Systems Software Engineer

Arjun
Dube


Objective

Build things that last

That is my guiding principle when it comes to building software, teams, and relationships. It sometimes makes my passion for innovating at the bleeding edge harder to indulge, because I forgo shortcuts whenever possible in favor of long-term solutions. I have found, however, that putting the right machinery in place early on is a force multiplier for future progress and makes you more resilient to sudden changes in the ecosystem you inhabit. A case in point is my current project bringing the best of what NVIDIA has to offer to the Apple spatial computing ecosystem, a landmark collaboration between our two companies that has been six years in the making. You can see a quick preview of our work, showcased at GTC '25, in the header video: teleoperating and training a humanoid robot and configuring a virtual car. This was accomplished using our open source SDK, which streams photorealistic, physically accurate simulations from NVIDIA Omniverse to Apple Vision Pro and iPad devices, seamlessly integrating virtual content into the real world and letting you interact with it. That content could be a car, a robot, or anything we can simulate, which is anything you can imagine.

Experience

NVIDIA, Santa Clara - Lead Cloud XR

September 2020 - Present

I lead the NVIDIA CloudXR team, responsible for streaming augmented and virtual reality content from cloud servers and local workstations to a variety of mobile devices, including head-mounted displays, tablets, and phones. Our SDK has two key components: a server-side driver that lets you take advantage of the rendering and simulation horsepower of your NVIDIA GPUs wherever they are deployed, and a set of client-side libraries that re-render this content on mobile devices, maintaining the fidelity needed for their pixel-dense displays while staying within their limited power envelopes and effectively hiding the network latency.

  • Lead the development of our SDK, spanning all aspects of the software development lifecycle from brainstorming ideas to approving designs to writing and reviewing code. The very high performance requirements of streaming multiple 4K 90 fps video streams entail working with highly optimized C++ code and having in-depth knowledge of the video pipeline on both server and client. This often also requires interop with other languages to unlock the capabilities of a particular client device, so our codebase contains Swift, Java, and TypeScript, as well as shaders for the various client and server graphics runtimes

  • Work closely with HMD manufacturers to bring up CloudXR client SDKs for their platforms. This includes collaborating with their engineering teams on optimizing performance and quality, getting the APIs we need into their OSes, and jointly shipping code. Most recently this resulted in a landmark collaboration between Apple and NVIDIA, announced at the GTC '24 keynote and generally available since December '24. Other ongoing partnerships include Meta, HTC, and Pico

  • Drive developer engagement with our SDKs by leading the teams that create documentation, write samples, and distribute our components via GitHub. I also guide the creation of in-person labs, for example at SIGGRAPH '24 and GTC '25, which drew over 300 developers who have since been running over 1,000 CloudXR sessions per week as they develop their own apps and services

  • Lead the development of our first-party integration with NVIDIA Omniverse to enable out-of-the-box XR streaming without installing a separate driver. This involved leading a cross-organizational team to solve the challenges of rendering photorealistic content and compressing it down to a format that could be delivered over an internet connection. We had to use DLSS, RTX, and NVENC in novel ways to accomplish our goals of rendering 6K per eye at 45 fps and re-rendering it at 90 fps on the client, producing content that seamlessly blends into the real world

  • Ideated and led the development of a patent-pending approach to depth-based reprojection that uses a locally rendered mesh to represent remotely rendered content. This mesh can be updated less frequently than the device frame rate while still yielding world-locked, high-quality content, and it can also efficiently defoveate the streamed content, saving memory bandwidth and GPU time on the client device. The approach can be implemented across a variety of client frameworks without access to their system compositors, which is very beneficial on more restrictive platforms that assume local rendering

  • Prior to leading the team, I developed several core pieces of our SDK that have been widely adopted by our partners and developers, including encryption and authorization, bi-directional low-latency audio, QoS optimizations, and pose prediction

VMware, Palo Alto - Architect XR

November 2018 - September 2020

I was the software architect for VMware's efforts around spatial computing. The project was pitched by me and two colleagues to XLabs, our Office of the CTO accelerator program. It was approved for three years of funding by then-CTO Ray O'Farrell; we exited the program in 2020 and graduated into VMware's End User Computing business unit.

  • Hired and onboarded a team of eight engineers which involved creating a HackerRank test focused on Spatial Computing and playing the role of a "bar-raiser" in our interview loops to make sure we were assessing the right skills

  • Created the high-level product architecture, validated it with our Technical Fellow community, and worked with individual engineers on their detailed functional specifications to establish a technical roadmap for our three-year funding period

  • Participated in customer briefings (Gulfstream, ARAMCO, etc.) and partner discussions (NVIDIA, Facebook, Magic Leap, etc.) to provide technical expertise with the aim of creating and solidifying these relationships

  • Created a set of contribution processes (coding standards, reviews, and unit/functional testing) and established tooling (Unity, GitLab, etc.) that would allow us to execute in a space entirely new to VMware

  • Owned on time delivery of releases for major milestones such as VMworld 2019 where we presented our augmented workflows solution to the VMworld customer/partner community and Michael Dell

  • Provided technical leadership to the team, including thought leadership (two filed patents), making high-impact technical decisions (when to use and contribute back to OSS, for example VRTK), conducting code reviews for major features, and mentoring junior employees (three interns)

  • Developed a patent-pending cloud-based object recognition solution that enables augmented reality devices to stream frames from their cameras to a server, which runs computer vision inference, reprojects the resulting bounding boxes back into three-dimensional world space, and renders them on device

  • Developed a patent-pending schema-based approach to creating both VR immersive training and AR workflow guidance from a single source of truth that establishes the components of a workflow, their spatial relationships in an assembled state, and the steps required to connect them

  • Created a POC streaming a virtual reality application over the NVIDIA CloudXR protocol from a virtual machine with an attached virtual GPU running in VMware's SDDC. Optimized the performance of the virtualization layer to achieve two simultaneous sessions on a single VM at 60 fps each

VMware, Palo Alto - Staff Software Engineer

July 2016 - November 2018

I was a technical lead on the team that migrated the functionality of the vRealize suite of products to the microservices-based cloud-native product Cloud Automation Services. In particular, I owned the framework that enables automated management of cloud services from AWS, Azure, GCP, and others.

  • Created a Go microservice that ran open source Terraform providers as a scalable cloud service, forming the backbone of our desired-state configuration engine for public cloud services. This let us roll out support for a new service in a matter of days instead of weeks

  • Contributed to, and acted as a team-wide expert on, our homegrown Java microservices framework Xenon (now open sourced), which unified load balancing (custom), asynchronous business logic (Netty), and data storage (Lucene + RAFT) into a single node that could be scaled horizontally to meet demand

  • Authored Lucene document schemas for backend data storage to ensure that data could be queried as quickly as possible while maintaining clear architectural boundaries between components

  • Created a zero-downtime upgrade system that would allow data migration from blue to green nodes, while intelligently load balancing the traffic based on the "owner" of a document

  • Utilized 20% time to create a virtual reality application enabling visualization and management of the datacenter at scale (10,000+ VMs), which was presented on stage at VMworld 2017 by our CEO Pat Gelsinger

VMware, Palo Alto - Sr. Member of Technical Staff

July 2014 - July 2016

I was part of the team that built vRealize Automation 7 and vRealize Code Stream. Our focus was on making the experience of managing the private and public cloud as seamless as possible and fully automating every part of the cloud-native application lifecycle.

  • Created a patent-pending plugin-based approach to CI/CD artifact management that allows Code Stream to integrate with multiple repositories such as JFrog Artifactory

  • Open sourced the SDK for creating similar plugins so that external partners (Puppet, Chef, and others) could extend Code Stream CI/CD pipelines with their own building blocks

  • Migrated the UI framework from Google Web Toolkit to ExtJS and modernized the look and feel of the vRealize suite of products

  • Became a member of our University Propel mentoring program, which involves working closely with a hand-picked new college graduate over an extended six-month internship with the goal of placing them in a full-time position at VMware

Microsoft, Mountain View - Software Development Engineer 2

December 2011 - July 2014

I worked on the Microsoft Remote Desktop team building iOS and Android applications to give users on these platforms a first party experience. Prior to this my main focus was developing high speed image processing algorithms for the Remote Desktop Windows client.

  • Re-architected a number of legacy Win32 libraries to cross compile on Windows, iOS and Android to enable a cross platform Remote Desktop Client using one codebase

  • Created a framework to allow drawing and displaying user interface elements on both the iPhone and the iPad using a single codebase

  • Designed and implemented a suite of UI controls that allow users to interact naturally with a keyboard- and mouse-driven Windows desktop from a touch-driven iOS device

  • Modified a platform independent HTTPS transport stack to support federated authentication using a proprietary authentication scheme which is slated for use across all our platforms

  • Developed patented high speed image processing algorithms for the Remote Desktop Windows client using Intel SSE3 vector instructions to enable doubling the compression of our codec with no subjective quality loss and a less than 1% increase in decoding time

Microsoft, Redmond - Software Development Engineer

September 2008 - December 2011

I worked on the Microsoft Windows Home and Small Business Server team building their Out Of Box Experience and then on the Microsoft Remote Desktop team using the Windows Azure platform to develop various components for our enterprise cloud services.

  • Implemented a network-based setup for Microsoft Small Business Server, comprising a web-based front end driving an OS setup bootstrap that applied the settings across a reboot

  • Architected an HTML5 front-end framework on top of Microsoft's ASP.NET to enable building an AJAX-enabled, multi-browser, lightweight UI with Microsoft tools

  • Wrote a build system on top of Microsoft Team Foundation Service, optimized for building, automatically testing, and deploying live services, currently in use by a team of 30+ developers

  • Was part of a team-wide initiative to establish best practices and methodologies for agile development, enabling Windows Server to build services with weekly release cycles alongside in-box products with yearly cycles from a common codebase

Platforms

[Chart: years of active development per platform]

Languages

[Chart: Pluralsight percentiles] Other languages without a percentile: Objective-C, Swift

Frameworks

[Chart: Pluralsight percentiles] Other frameworks without a percentile: ARKit, ARCore, OculusVR, SteamVR, OpenXR & GraphQL

Interests

LEGO
Sci-Fi & Fantasy
Motor Racing
Archery
Taiko

Education

Stanford Graduate School of Business, California

Stanford Ignite Accelerator
Graduated May 2018

Stanford University,
California

B.S. Computer Science with a minor in Economics
Graduated May 2008

Ibn Seena English High School, United Arab Emirates

A levels in Physics, Chemistry, Math & Economics
Graduated June 2004

Awards & Leadership

Patents: 6 granted, 3 pending

Stanford Undergraduate Teaching Assistant: CS106A/B, CS107, and CS108

Stanford Student Government: Freshman Council Member

Youngest member of Mensa in the UAE: joined in 2005 at age 19

Edexcel UK Extraordinary Achievement Award: Middle East Valedictorian

Ibn Seena English High School: Valedictorian

Sharjah Tennis Open Under 16: 2nd place winner

Links

Google Scholar
LinkedIn
YouTube
Github

Contact

Email: arjundube@gmail.com
Cell: +1 650 353 1971
X: @Xrjuna