Adversarial attacks pose a significant threat to the reliability of AI-based malware detection systems. By subtly modifying malicious samples to evade classification, adversarial techniques can undermine even state-of-the-art models, exposing critical security gaps. This has motivated the development of robust defense strategies aimed at improving the resilience of malware detectors. In this seminar, we will discuss common approaches, including adversarial training, randomized smoothing, diversification, and the elimination of attack vectors. While these defenses can enhance robustness, they often introduce trade-offs in detection performance, generalization, or computational cost.
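To make the first of these defenses concrete, the sketch below illustrates the basic loop of adversarial training on a toy linear detector: at each step, worst-case (FGSM-style) perturbations of the inputs are generated against the current model, and the model is then updated on those perturbed samples. All names, data, and parameters here are hypothetical and chosen only for illustration; real malware detectors operate on discrete or structural features and require domain-specific, functionality-preserving perturbations rather than the simple continuous noise used here.

```python
# Minimal, hypothetical sketch of adversarial training for a toy linear
# malware detector (logistic regression on synthetic continuous features).
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 samples, 16 features, binary labels (1 = malicious).
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w = np.zeros(16)
b = 0.0
lr, eps, epochs = 0.1, 0.3, 50  # eps bounds the adversarial perturbation

def predict_proba(X, w, b):
    # Sigmoid output of the linear model.
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for _ in range(epochs):
    # Inner step: craft FGSM-style adversarial examples against the current model.
    p = predict_proba(X, w, b)
    grad_x = np.outer(p - y, w)          # gradient of the logistic loss w.r.t. the inputs
    X_adv = X + eps * np.sign(grad_x)    # worst-case perturbation within an L_inf ball

    # Outer step: update the model parameters on the perturbed samples.
    p_adv = predict_proba(X_adv, w, b)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    grad_b = np.mean(p_adv - y)
    w -= lr * grad_w
    b -= lr * grad_b

print("accuracy on clean data:", np.mean((predict_proba(X, w, b) > 0.5) == y))
```

The trade-offs mentioned above show up even in this toy setting: training on perturbed inputs typically costs extra computation per step and can reduce accuracy on clean samples in exchange for robustness within the chosen perturbation budget.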