Project 1.1 - Authentication Core
The TSCP paradigm calls for extensive checking of computations and communications, including deep authentication and data and command provenance. A first and essential part of this project is to identify which authentication linkages are needed (potentially tracing back from the process requesting a command, through the processes that spawned it, to the communications that led to the spawning, to the owner of the process generating the communication, and finally to biometrics on that user). The space of options is large, and we must focus on the chaining relationships that make the most sense and can be efficiently supported. The same is true of digital artifact provenance relations: a similar kind of chained recording is necessary, and the first question is what should be recorded. A second question, for both problems, is implementation.
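One way to picture such a chained record, purely as an illustrative sketch (the event fields and names below are hypothetical, not a TSCP design), is a sequence of records where each link embeds the digest of its predecessor, so tampering with any earlier link invalidates every later one:

```python
import hashlib
import json

def link_record(event: dict, prev_digest: str) -> dict:
    """Create one link in an authentication/provenance chain.

    Each record embeds the digest of its predecessor, so modifying
    any earlier link invalidates every later one.
    """
    record = {"event": event, "prev": prev_digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record

# Hypothetical chain: biometric check -> user session -> spawned process -> command
chain = []
prev = "0" * 64  # genesis digest
for event in [
    {"kind": "biometric", "subject": "operator-17"},
    {"kind": "session", "owner": "operator-17"},
    {"kind": "spawn", "parent": "session", "child": "ctrl-proc"},
    {"kind": "command", "issuer": "ctrl-proc", "op": "open_valve"},
]:
    rec = link_record(event, prev)
    chain.append(rec)
    prev = rec["digest"]

def verify(chain, genesis="0" * 64):
    """Walk the chain, recomputing each digest from the recorded body."""
    prev = genesis
    for rec in chain:
        body = {"event": rec["event"], "prev": rec["prev"]}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["prev"] != prev or hashlib.sha256(payload).hexdigest() != rec["digest"]:
            return False
        prev = rec["digest"]
    return True
```

A production design would sign each link cryptographically rather than rely on bare hashes; the sketch only shows the chaining structure the project must decide how to populate.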
Project 2.1 - Authentication Trust Technology
The deep authentication and digital provenance efforts of Theme 1 will depend on the application of cryptographic signatures. The choice of framework for implementing those signatures and managing their keys is a serious one. One could use standard PKI certificates, but looking ahead to TSCP integrating many systems that need cross-system validation, one foresees exactly the problems that limit PKI in enterprise systems. We will investigate alternative mechanisms that promise better scaling, using technologies such as Google’s macaroons or some application of blockchains.
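The appeal of macaroons is that they replace certificate chains with a chain of HMACs: each added caveat re-keys the signature, so any holder can attenuate a token but no one can widen it without the root key. A minimal sketch of that construction (simplified; a real macaroon library also supports third-party caveats and serialization):

```python
import hmac
import hashlib

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

class Macaroon:
    """Minimal sketch of the macaroon construction: a bearer token whose
    signature is an HMAC chain over its caveats."""

    def __init__(self, root_key: bytes, identifier: str):
        self.identifier = identifier
        self.caveats = []
        self.sig = _hmac(root_key, identifier)

    def add_caveat(self, predicate: str):
        # Any holder can attenuate the token further, but no one can
        # remove a caveat without knowing the root key.
        self.caveats.append(predicate)
        self.sig = _hmac(self.sig, predicate)
        return self

def verify(macaroon: Macaroon, root_key: bytes, check) -> bool:
    """Recompute the HMAC chain from the root key and confirm every caveat
    predicate holds (via the caller-supplied `check` function)."""
    sig = _hmac(root_key, macaroon.identifier)
    for predicate in macaroon.caveats:
        if not check(predicate):
            return False
        sig = _hmac(sig, predicate)
    return hmac.compare_digest(sig, macaroon.sig)
```

For example, a token minted for a monitoring device could be attenuated with caveats like `"role = monitor"` before being handed onward; verification requires only the root key, not a certificate hierarchy.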
Project 3.1 - Standards for Interaction of COTS Devices with TSCP
There will always be contexts where a device that has not been developed to TSCP specifications must interact with a CPS protected by TSCP. Legacy devices are a good example, and they are important enough that Thrust 1 devotes effort to developing policy governing how they interact with the rest of the system. A harder problem is accommodating the reality that commercial off-the-shelf devices will have to be brought to a TSCP-protected system to monitor it, perform diagnostics, and provide maintenance support. In the future, one can even imagine apps running on tablets that routinely interact with the TSCP-protected system but also have connections elsewhere, potentially even into the broader Internet. The objective of this project is to determine what standards (or applications) must be imposed on commodity products for them to play a limited role in a TSCP-protected CPS.
Project 3.2 - TSCP System Verification
To meet the highest standards of assurance, dependability, and security, the ultimate goal of future TSCP systems is formally guaranteed correctness: mathematically grounded certification that the final TSCP system indeed satisfies the formal requirements of its application domain. New formal methods research is needed to develop techniques for the complex, large, distributed, and decentralized systems that TSCP requires. While there are isolated techniques for verifying a program in a given language, and techniques for generating provably correct programs in some isolated domains, there is currently no general approach crafted to formally guarantee, statically, that a runtime-monitored or runtime-verified system will execute correctly. We believe such verification technology is timely and critical for future systems, and in particular for TSCP.
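To make the notion of a runtime-monitored system concrete, consider a toy monitor (the property and event names here are entirely hypothetical) that checks a safety property over an event trace. The open verification problem this project targets is proving statically that a system composed with such a monitor can never reach the violating branch:

```python
def monitor(trace):
    """Toy runtime monitor for the safety property: 'open_valve' must be
    preceded by 'authorize' with no intervening 'deauthorize'."""
    authorized = False
    for event in trace:
        if event == "authorize":
            authorized = True
        elif event == "deauthorize":
            authorized = False
        elif event == "open_valve" and not authorized:
            return False  # property violated at runtime
    return True
```

Runtime monitoring catches a violation only as it happens; the static guarantee sought here would certify, before deployment, that no execution of the monitored system can trigger the violating branch at all.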
Project 4.1 - Data Analytics and Alert Response
TSCP protects the cyber-physical systems underpinning critical infrastructures; these systems continuously generate massive quantities of data, and analytics over that data is key to identifying many kinds of security issues. Affordable real-time analytics at scale is the enabler for intelligent, real-time alerts. In this project, we aim to deliver this capability by exploiting maturing programmable hardware technologies to push real-time analytics out of back-end servers and into front-end data collection devices, massively reducing data transmission and shortening response time for anomalous events. We will also exploit the new hardware now becoming available in clouds (GPUs, FPGAs) to cut the cost of key mining tasks for our applications, especially for deep learning models and difficult types of data.
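The data-reduction idea can be illustrated with a minimal sketch (a hypothetical interface, not a TSCP API): a collection device keeps a sliding window of recent sensor readings and forwards only readings that deviate strongly from the window statistics, so back-end servers receive alerts rather than raw streams:

```python
from collections import deque

class EdgeAnomalyFilter:
    """Sketch of pushing analytics to the collection device: forward a
    reading upstream only if it deviates strongly from the recent window."""

    def __init__(self, window: int = 64, threshold: float = 3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` should be forwarded as a potential anomaly."""
        forward = False
        if len(self.window) >= 8:  # wait for enough history
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5
            forward = std > 0 and abs(value - mean) > self.threshold * std
        self.window.append(value)
        return forward

# 100 routine readings plus one spike: only the spike is forwarded,
# a ~100x reduction in transmitted data for this toy stream.
f = EdgeAnomalyFilter()
readings = [10.0 + 0.1 * (i % 5) for i in range(100)] + [50.0]
forwarded = [v for v in readings if f.observe(v)]
```

On programmable front-end hardware the same windowed computation would run in the data path; the point of the sketch is only the filtering structure, not the detection model, which for real deployments would be a learned one.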