Protecting the confidentiality and integrity of sensitive information is central to trustworthy computing. This project focuses on one aspect of the problem, namely the difficulty of developing software that satisfies critical information flow properties. The approach of secure information flow analysis is to perform a static analysis, usually in the form of a type system, on a program prior to executing it, with the goal of proving that it does not leak any information from its high inputs to its low outputs; this is formalized as a property called noninterference. But noninterference is widely recognized to be too restrictive in practice -- often we need low output that depends on high input. In implementations, such deliberate leaks of information can be allowed through an explicit declassify construct, which functions like a type cast to circumvent the typing rules. But declassification, while expedient, throws into question what is then ensured by the analysis.

One promising disciplined approach to relaxing noninterference is to develop a quantitative theory of information flow that lets us talk about "how much" information is leaked. Such quantitative theories are being studied in a variety of contexts, including secure information flow, anonymity protocols, and side-channel analysis, and there is an emerging consensus to base such theories on the concepts of Shannon entropy and mutual information. But a useful theory of quantitative information flow must provide appropriate security guarantees: if the theory says that an attack leaks x bits of secret information, then x should be useful in calculating bounds on the resulting threat.
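The Shannon-based measure can be illustrated with a small sketch. The deterministic program below, which reveals the low two bits of its high input, is a hypothetical example chosen for illustration, not part of the project; with a uniformly distributed high input, the leakage I(high; low) reduces to the Shannon entropy of the observed output.

```python
import math
from collections import Counter

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a distribution given by raw counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

# Hypothetical deterministic program: the attacker observes low = high % 4.
highs = range(256)                 # high input, assumed uniform over 8 bits
outputs = [h % 4 for h in highs]   # low output visible to the attacker

# For a deterministic program with a uniform high input, the mutual
# information I(high; low) equals the Shannon entropy of the output.
leak = shannon_entropy(Counter(outputs).values())
print(leak)  # 2.0 bits leaked
```

On this reading, the analysis would report that the program leaks 2 of the 8 secret bits.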
Unfortunately, it can be argued that the standard theories actually fail to provide such guarantees, because a random variable can have arbitrarily large Shannon entropy even while being highly vulnerable to being guessed.

This project will therefore explore an alternative foundation for quantitative information flow based on a concept of vulnerability (closely related to Bayes risk), which measures uncertainty using Rényi's min-entropy rather than Shannon entropy. The goal is to develop the new foundation both theoretically and practically. The main technical challenge will be to develop type-based static analyses that can be used to guarantee that programs satisfy desired quantitative information flow policies. More broadly, this project aims to help enable the disciplined development of software with guaranteed information flow properties, and to educate students about programming for secure information flow.
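The gap between Shannon entropy and vulnerability can be made concrete with a numeric sketch. The distribution below (one value with probability 1/2, the remaining n-1 values sharing the rest uniformly) is a standard illustration, not taken from the project: its Shannon entropy grows without bound as n grows, yet an attacker guesses the secret in one try with probability 1/2, so its min-entropy stays at one bit.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(X) in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def vulnerability(probs):
    """V(X): the probability of guessing the secret in one try."""
    return max(probs)

def min_entropy(probs):
    """Renyi min-entropy: H_inf(X) = -log2 V(X)."""
    return -math.log2(vulnerability(probs))

# One value carries probability 1/2; the other n-1 share the rest uniformly.
n = 2**20
probs = [0.5] + [0.5 / (n - 1)] * (n - 1)

print(shannon_entropy(probs))  # about 11 bits, and grows with n
print(vulnerability(probs))    # 0.5: highly vulnerable to guessing
print(min_entropy(probs))      # 1.0 bit, regardless of n
```

This is the motivation for the vulnerability-based foundation: min-entropy directly bounds the attacker's single-guess success probability, which a large Shannon entropy does not.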