Intelligence, Surveillance, & Reconnaissance

The ISR division of Etegent Technologies advances technology in machine learning, novel algorithm development, data analytics, and visualization to create useful tools for its customers in the Department of Defense and the Intelligence Community.

We solve complex problems for intelligence groups involving signal processing, signature modeling, target detection, and code optimization. The ISR division spans the spectrum from high-end academic research to end-user implementations; we have a demonstrated ability to operate on the cutting edge of research while also delivering pragmatic tools that improve end-user capabilities.

Machine Learning

The machine learning group focuses primarily on computer vision tasks within the ISR community. We create effective, pragmatic algorithms that incorporate cutting-edge research and are used by end users such as intelligence analysts. Accomplishing this requires two things: staying current with the state of the art in machine learning and staying connected to our end users. We stay current by hosting a weekly ML club meeting to review the latest papers and ideas in machine learning and deep learning. We stay connected through recurring meetings with end users and analysts, gathering consistent feedback on how our models can be made more user-friendly and more applicable to their tasks.
Intelligence Tools

The Intelligence Tools team develops and delivers tools to aid analysts within the ISR community. Areas of particular note include the analysis of structured observation management (SOM) data in a spatio-temporal intensity map frame of reference, semi-automated processing of hyperspectral and infrared data sources, and tooling to support truthing and development of machine learning models. Tools are developed in a user-centric manner, with a focus on understanding the needs of the end user, providing responsive interfaces, and optimizing algorithm runtime to maximize the support provided to analysts.
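As a rough illustration of the spatio-temporal intensity map concept (a minimal sketch, not a depiction of any delivered tool), geolocated, timestamped observations can be binned into a stack of per-window spatial count grids. The function name, variable names, and bin choices below are hypothetical.

```python
import numpy as np

def intensity_map(lats, lons, times, lat_edges, lon_edges, time_edges):
    """Bin geolocated, timestamped observations into spatial intensity maps,
    one per time window.

    Returns an array of shape (n_time_bins, n_lat_bins, n_lon_bins) where
    each cell counts the observations falling in that latitude/longitude bin
    during that time window.
    """
    sample = np.column_stack([times, lats, lons])
    counts, _ = np.histogramdd(sample, bins=[time_edges, lat_edges, lon_edges])
    return counts

# Example: 1,000 synthetic observations over a one-degree square and a 24-hour span.
rng = np.random.default_rng(0)
lats = 39.0 + rng.random(1000)
lons = -84.5 + rng.random(1000)
times = rng.random(1000) * 24.0            # hours since start of collection
maps = intensity_map(
    lats, lons, times,
    lat_edges=np.linspace(39.0, 40.0, 51),
    lon_edges=np.linspace(-84.5, -83.5, 51),
    time_edges=np.linspace(0.0, 24.0, 5),  # four 6-hour windows
)
print(maps.shape)  # (4, 50, 50)
```

Slices of such a stack can then be rendered as heat maps so an analyst can scan for spatial or temporal concentrations of activity.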

Modeling and Simulation Group

Historically, sensor exploitation systems have been designed in isolation to support a specific range of operating conditions and a predetermined sensor payload. As adversary countermeasures and capabilities evolve, this monolithic, single-sensor-for-a-specific-job approach is no longer tenable. Future missions will instead employ dynamic configurations of sensors, each with unique capabilities and associated costs. The DoD/IC community needs to develop the fundamental science required to manage the added complexity of reconfigurable, retaskable sensors and to make the best use of available resources to enhance mission success. Key developments needed to support this new sensing paradigm include performance modeling, sensor fusion (decision/feature/handoff), sensor simulation, machine learning, and war gaming via system-of-systems simulations. The performance modeling team leverages a deep understanding of sensor phenomenology to train and characterize machine learning and statistical pattern recognition algorithms, providing tools that enable analysts, decision makers, and warfighters to employ autonomy more effectively in pursuit of their mission goals.
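As a hedged sketch of the decision-level fusion mentioned above (a textbook formulation under an independence assumption, not the division's specific method), evidence from multiple sensors can be combined by summing per-sensor log-likelihood ratios with the prior log-odds. The function name and values below are illustrative assumptions.

```python
import numpy as np

def fuse_decisions(log_likelihood_ratios, prior_log_odds=0.0):
    """Decision-level fusion assuming conditionally independent sensors:
    per-sensor log-likelihood ratios for 'target present' add, along with
    the prior log-odds.  Returns the fused posterior probability that a
    target is present.
    """
    fused_log_odds = prior_log_odds + np.sum(log_likelihood_ratios)
    return 1.0 / (1.0 + np.exp(-fused_log_odds))

# Example: three sensors with differing confidence in a detection.
# Positive values favor 'target present'; negative values favor 'target absent'.
llrs = np.array([1.2, 0.4, -0.3])
print(f"fused P(target) = {fuse_decisions(llrs, prior_log_odds=-1.0):.3f}")
```

Performance modeling of the individual sensors supplies the statistics behind those likelihood ratios, which is one reason the two capabilities are developed together.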

Performance Computing

Performance computing encompasses delivering the highest throughput and computational performance of ISR algorithms, tools, and technology on numerous types of computing platforms while meeting any number of deployment requirements, including size, weight, and power (SWaP). Example computing platforms include massively parallel supercomputing clusters; parallel-processing desktop and multi-core server computers; highly parallelizable, programmable hardware such as field-programmable gate arrays (FPGAs) and graphics processing units (GPUs); and the latest low-power, high-performance embedded systems such as the NVIDIA Jetson AGX Xavier. Deployment also includes developing and implementing optimizations at all levels of the computing stack, from CPU profiling, to utilizing specialized performance libraries, to restructuring the underlying mathematics to achieve identical outputs with the highest performance.
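As one hedged example of restructuring the underlying mathematics without changing the output (illustrative only, not a specific Etegent optimization), a direct matched-filter-style correlation can be replaced with an FFT-based formulation that produces numerically identical results at much lower asymptotic cost. The signal and template below are synthetic.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
signal = rng.standard_normal(100_000)
template = rng.standard_normal(1_024)

# Direct correlation: O(N * M) multiply-accumulates.
direct = np.correlate(signal, template, mode="valid")

# Mathematically equivalent FFT-based formulation: O((N + M) log(N + M)).
# Correlation of real signals is convolution with the time-reversed template.
fast = fftconvolve(signal, template[::-1], mode="valid")

# Identical outputs up to floating-point round-off.
print(np.allclose(direct, fast))  # True
```

The same idea, verifying that a restructured computation matches the reference output before swapping it in, applies whether the target is a server CPU, a GPU, or a SWaP-constrained embedded board.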