Mohammad Mohammadi Amiri

 

Assistant Professor
Department of Computer Science
Rensselaer Polytechnic Institute (RPI)


Contact

MRC 331A
Troy, NY 12180, USA
[Email]
[Google Scholar] | [LinkedIn] | [ResearchGate] | [X (Twitter)]


Dr. Amiri was appointed Assistant Professor in the Department of Computer Science at RPI in Fall 2023. His research is mainly dedicated to advancing artificial intelligence through the strategic use of data. In today's rapidly evolving technological landscape, the ability to harness and optimize data is key to unlocking the full potential of intelligent systems. Dr. Amiri's research focuses on leveraging data to enhance the capabilities of artificial intelligence, aiming to create systems that benefit everyone. His primary research interests include large language models, data valuation, federated learning, and deep learning.

Dr. Amiri received the Ph.D. degree in Electrical and Electronic Engineering from Imperial College London in 2019. He received the M.Sc. degree in Electrical and Computer Engineering from the University of Tehran in 2014, and the B.Sc. degree in Electrical Engineering from the Iran University of Science and Technology in 2011, both with the highest rank. He received the Best Ph.D. Thesis award from the Department of Electrical and Electronic Engineering at Imperial College London for the 2018-2019 academic year, as well as from the IEEE Information Theory Chapter of UK and Ireland in 2019. His paper "Federated learning over wireless fading channels" received the IEEE Communications Society Young Author Best Paper Award in 2022.

For motivated Ph.D. students with strong mathematical and analytical skills: Please send your CV to the above email address if you are interested in Machine Learning and comfortable with programming.

Research


  • Large language models (LLMs)

  • In the era of artificial intelligence (AI) and natural language processing, large language models (LLMs) have become transformative tools, reshaping how we interact with and extract insights from vast amounts of textual data. These models, with their immense scale and capabilities, are central to a wide range of applications, from chatbots and virtual assistants to content generation, translation, and information retrieval. However, the effective deployment of LLMs presents significant challenges, particularly in terms of efficiency, memory usage, alignment, data management, and reasoning. Our research group focuses on optimizing the resource-intensive process of fine-tuning LLMs to enhance efficiency in terms of time, energy, and computational resources. We are also addressing the challenge of memory efficiency during LLM inference, ensuring these models can be deployed effectively even in resource-constrained environments. Moreover, we are dedicated to the critical task of LLM alignment, both through theoretical analysis and empirical studies, to ensure that these models behave in ways that are consistent with intended goals and ethical standards. Our research also includes data valuation methods to prioritize high-quality data in LLM training and investigates how LLMs can be utilized to evaluate and enhance the quality of data, creating a feedback loop that benefits both the models and the data they rely on. Additionally, we work on improving the reasoning capabilities of LLMs to ensure they generate complex, contextually accurate responses.


  • Data valuation

  • Data is the main fuel of the modern world, enabling AI and driving technological growth. The demand for data has grown substantially, and data products have become valuable assets to purchase and sell, since acquiring high-quality data is extremely valuable for organizations seeking to discover knowledge. As a valuable resource, data calls for a principled method to quantify its worth to data seekers. This is addressed via data valuation, an essential component in realizing a fair data-trading platform for owners and seekers.
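One widely studied principled method for quantifying the worth of individual data points is the Shapley value, which credits each point with its average marginal contribution to a utility function. The toy sketch below is purely illustrative (the dataset, target, and utility function are hypothetical choices, not taken from the text): it values four data points by how much each improves a simple mean-based estimate of a target quantity.

```python
import itertools

import numpy as np

# Illustrative toy example of data valuation via the Shapley value.
# The data, target, and utility function below are hypothetical choices
# made for this sketch; real data-valuation systems define utility as,
# e.g., the accuracy of a model trained on the subset.

data = np.array([1.0, 2.0, 3.0, 10.0])  # the last point is an outlier
target = 2.0  # quantity the data buyer wants to estimate


def utility(subset):
    """Value of a subset: how well its sample mean estimates the target."""
    if len(subset) == 0:
        return 0.0
    return -abs(np.mean([data[i] for i in subset]) - target)


# Exact Shapley values by enumerating all orderings (feasible for tiny n).
n = len(data)
shapley = np.zeros(n)
perms = list(itertools.permutations(range(n)))
for perm in perms:
    coalition = []
    for i in perm:
        before = utility(coalition)
        coalition.append(i)
        shapley[i] += utility(coalition) - before  # marginal contribution
shapley /= len(perms)

# The outlier (10.0) drags the mean away from the target, so it
# receives the lowest (negative) value.
print(shapley)
```

By the efficiency property of the Shapley value, the per-point values sum to the utility of the full dataset, which makes the resulting payments a natural basis for a fair data-trading platform.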


  • Federated learning/Privacy-preserving machine learning

  • Today, a tremendous amount of data is generated at different network layers and can be utilized to improve the intelligence of many applications. Privacy-preserving machine learning techniques, such as federated learning, have emerged to exploit this decentralized data while the data stays local to the owner’s device. Specifically, federated learning aims to fit a global model to the decentralized data available locally at the edge devices. The devices receive the global model from the server, update it using their local data, and share the local updates with the server. The server aggregates the local updates, where a common aggregation rule is averaging, and updates the global model. Despite its applicability, this algorithm poses various challenges, including participation incentivization, privacy concerns, heterogeneous data distributions, model heterogeneity, and communication overhead.
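The round structure described above can be sketched in a few lines. The simulation below is a minimal, hypothetical illustration (the synthetic data, linear model, learning rate, and round counts are assumptions made for this sketch): each device takes a few local gradient steps on its own data, and the server averages the returned models to refresh the global model.

```python
import numpy as np

# Minimal sketch of federated averaging on a linear model.
# All data and hyperparameters here are hypothetical, chosen only to
# make the server/device round structure concrete.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Synthetic local datasets, one per edge device (data never leaves it).
devices = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    devices.append((X, y))


def local_update(w, X, y, lr=0.1, steps=5):
    """Device-side update: a few gradient steps on the local data."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w


w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    # Server broadcasts w_global; each device updates it locally.
    local_models = [local_update(w_global, X, y) for X, y in devices]
    # Server aggregates the local updates by averaging.
    w_global = np.mean(local_models, axis=0)

print(w_global)  # close to true_w
```

Only model parameters cross the network in each round, never raw data, which is the privacy-preserving property the paragraph describes; the remaining challenges (heterogeneous data, communication overhead, incentives) arise when the devices' datasets and capabilities differ.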


  • Deep learning

  • The theory of deep learning involves a comprehensive study of the fundamental principles and mathematical foundations that underlie neural networks. This includes understanding the intricacies of network architectures, optimization algorithms, regularization techniques, generalization properties, and how these components contribute to the learning process within deep learning models. The goal is to elucidate the theoretical aspects to improve model interpretability, robustness, and overall performance, advancing the field and its applications.

Publications

Members

Awards

  • IEEE Communications Society Young Author Best Paper Award (2022), paper "Federated learning over wireless fading channels", IEEE Transactions on Wireless Communications, vol. 19, no. 5, pp. 3546-3557, May 2020.
  • Best PhD Thesis (2019), IEEE Information Theory Chapter of UK and Ireland.
  • Eryl Cadwallader Davies Prize - Outstanding PhD Thesis (2019), EEE Department, Imperial College London.
  • EEE Departmental Scholarship (2015 - 2019), Imperial College London.
  • Excellent Student (2014), Ranked 1st among all M.Sc. students in ECE Department, University of Tehran.
  • Excellent Student (2011), Ranked 1st among all B.Sc. students in EE Department, Iran University of Science and Technology.

Invited Talks

  • Collective intelligence, Bell Labs, May 2023.
  • Decentralized data valuation, Decentralized Society + Web3 Research Panel, Media Lab Fall Meeting, Massachusetts Institute of Technology, Oct. 2022.
  • Federated edge machine learning, Keynote speaker at Futurewei University Days Workshop, Aug. 2021.
  • Federated edge learning: advances and challenges, Keynote speaker at IEEE International Conference on Communications, Networks and Satellite (COMNETSAT), Dec. 2020.
  • Federated edge learning: advances and challenges, King's College London, Nov. 2020.
  • Federated edge learning: advances and challenges, University of Maryland, Oct. 2020.
  • Federated edge learning: advances and challenges, University of Arizona, Oct. 2020.
  • Federated learning: advances and challenges, Virginia Polytechnic Institute and State University (Virginia Tech), Oct. 2020.
  • Fundamental limits of coded caching, Ohio State University, Nov. 2016.
  • Fundamental limits of coded caching, Stanford University, Nov. 2016.

Academic Services

  • Technical program committee
    • IEEE International Conference on Distributed Computing Systems, 2024.
    • IEEE Global Communications Conference (Globecom), Wireless Communications for Distributed Intelligence (WCDI), 2023.
    • IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2022.
    • IEEE International Conference on Microwaves, Communications, Antennas Electronic Systems, 2021.
    • IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2021.
    • IEEE International Conference on Communications (ICC) Workshop - Edge Learning for 5G Mobile Networks and Beyond (EdgeLearn5G), 2021.
    • IEEE International Conference on Communications (ICC): SAC Machine Learning for Communications Track, 2021.
    • IEEE International Conference on Communications, Networks and Satellite (COMNETSAT), 2020.
    • IEEE Global Communications Conference (Globecom), Cognitive Radio and AI-Enabled Networks (CRAEN), 2020.
    • Conference on Information Sciences and Systems (CISS), 2020.
    • IEEE International Conference on Communications (ICC) Workshop - Edge Machine Learning for 5G Mobile Networks and Beyond (EML5G), 2020.

Openings

I am looking for motivated Ph.D. students with strong mathematical and analytical skills. Please send me your CV if you are interested in Machine Learning and comfortable with programming.