# hi!!
I'm Kamilė Lukošiūtė^[ ![[L1003459.jpg | 200]]].
I'm a Research Scholar at the Centre for AI Governance. I think about the implications of AI for cybersecurity, including how adversaries will use frontier models, how defenders can benefit, and what this all means for critical infrastructure and societal resilience.
Contact me at `kamile [dot] lukosiute [at] governance [dot] ai`.
### blog posts about ai
- [[Design for the defenders you care about or risk being useless]]: On operational constraints of critical infrastructure providers and the limits of AI-powered vulnerability discovery
- [[Building evaluations for cybersecurity assistance]]: A short work-in-progress post
- [[You need to be spending more money on evals]]: On over-generalisation in LLM evaluations
- [[Maybe don't give language models standardized tests]]: On misleading LLM evaluations
### blog posts about other topics
- [[American Character]]: On the best things about America
- [[Neutron star mergers and fast surrogate modeling]]: On my astrophysics research
- [[What does BCEWithLogits actually do?]]: On technical details
- [[When can a tensor be view()ed?]]: On more technical details
### other past work
- I was an AI Security Researcher at Cisco Security, where I worked on applications of LLMs to tasks like malware attribution.
- I was a resident researcher at Anthropic, where I worked on scalable oversight and evaluations. One of my publicly available projects was on [model-written evals](https://arxiv.org/abs/2212.09251).
- I was a PhD student in machine learning for theoretical physics but dropped out to help make AI safe.
- During my bachelor's and master's in physics, I published a paper on [training cVAE surrogate models for kilonova parameter inference](https://arxiv.org/abs/2204.00285).
### contact!
[[CV.pdf |CV/Resume]]
[twitter](https://twitter.com/kamilelukosiute)