Dear Alexis Apostolakis and Vasileios Botsos,

My name is Samrat R M, and I am very interested in contributing to the GSoC project “Open-Source AI Framework for Thermal Satellite Payload Data Analysis.”

*Briefly about me:* I have around 1.5 years of experience as an Associate Software Development Engineer at Swiggy. I am currently transitioning toward machine learning and studying Data Science at Scaler through an 18-month online program. I have attached my CV for more details.

I recently started exploring the repository and attempted to implement Single Pass Uncertainty and Grad-CAM++ in the TIRAuxCloud project.

PRs:
- https://github.com/Orion-AI-Lab/TIRAuxCloud/pull/5
- https://github.com/Orion-AI-Lab/TIRAuxCloud/pull/4

I also opened an issue in the repository to clarify some doubts and better understand the expectations around the Grad-CAM++ feature:
- https://github.com/Orion-AI-Lab/TIRAuxCloud/issues/6

I would really appreciate your guidance on a few points:
1. Feedback on the above PRs and whether the approach aligns with the project goals.
2. Are there any tasks or features you would recommend I work on before submitting the proposal?
3. My primary interest is in working on extensions related to uncertainty quantification and explainability. Could you please let me know your expectations for this direction? Also, would focusing on this area alone be sufficient for the proposal, or would you recommend additionally including components such as baseline and benchmarked ML/DL models or elements of a modular AI pipeline?

If possible, I would also appreciate clarification on the following topics:
1. For the modular AI pipeline, are there any specific frameworks or projects you would like me to use as inspiration for the design?
2. Regarding the expected outcome “Baseline and benchmarked ML/DL models,” I wanted to clarify the intended scope. Should the goal be to reproduce and evaluate additional architectures (e.g., HRCloudNet, DeepLabV3, BEFUnet, Swin-Unet, SwinCloud, Siamese, bam-cd), similar to how the repository currently reports results for models such as CDnetV2, SegFormer, SwinCloud, and U-Net in the results module, and then compare their performance using the same evaluation metrics?

I’m very excited about this project and the opportunity to contribute. Any feedback or direction would be extremely helpful as I work on my proposal.

Thank you for your time.

Best regards,
Samrat R M
Attachment:
Samrat_RM_SDE.pdf
Description: Adobe PDF document