A place for my wonderfully dumb thoughts. If you’re a recruiter and you somehow end up on this page, please don’t base your decision on this content!!
The recent ones:
I got this thought during one of my DL classes, specifically during the lecture on optimizers. Could we use global-minimum-finding optimizers to identify surface features on a terrain map, such as flat land, mountains, and other geographical structures?
Edit: Given a terrain map, maybe GD-based optimizers could find an optimal path for constructing roads from a hilltop to the bottom, since such optimizers prefer smooth gradients over sharp changes (the same bias that helps generalization).
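A toy sketch of the idea, assuming a synthetic heightmap (a real one would come from elevation data) and a hypothetical `descend` helper: treat elevation as the loss surface and follow the negative gradient from a hillside cell, so the visited cells trace a downhill "road".

```python
import numpy as np

# Synthetic terrain: a bowl with its lowest point at (70, 70) plus an
# off-path hill near (15, 80). Hypothetical stand-in for real elevation data.
n = 100
ys, xs = np.mgrid[0:n, 0:n]
height = ((xs - 70) ** 2 + (ys - 70) ** 2) / 200.0 \
         + 5.0 * np.exp(-((xs - 15) ** 2 + (ys - 80) ** 2) / 200.0)

# Numerical terrain gradient: gy is d(height)/d(row), gx is d(height)/d(col).
gy, gx = np.gradient(height)

def descend(start, lr=10.0, steps=500):
    """Follow -gradient from `start` (row, col); returns the downhill path."""
    pos = np.array(start, dtype=float)
    path = [tuple(pos)]
    for _ in range(steps):
        r = min(max(int(round(pos[0])), 0), n - 1)
        c = min(max(int(round(pos[1])), 0), n - 1)
        g = np.array([gy[r, c], gx[r, c]])
        if np.linalg.norm(g) < 1e-4:   # flat cell reached, stop here
            break
        pos -= lr * g                  # plain gradient-descent step
        path.append(tuple(pos))
    return path

path = descend((10.0, 10.0))           # start on the hillside
print(len(path), path[-1])             # ends near the basin at (70, 70)
```

This is vanilla GD on a grid; a momentum or Adam-style update would be the natural next experiment, and a local dip would trap it exactly the way local minima trap optimizers.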
Just like in circuitry, is it possible to manually switch neural paths on or off in Deep Neural Networks (DNNs)? I understand that networks implicitly handle this using weights, such as activating certain connections when detecting Object A at scale S1 versus Object B at scale S2. However, is it possible for a user to manually control these connections and experiment with them, similar to how we manipulate circuits on a breadboard to get deterministic outputs?
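You can actually play with this on a toy scale by multiplying a weight matrix with a 0/1 mask, which is essentially an ablation experiment. A minimal sketch, assuming hand-picked weights as stand-ins for a trained network (the `forward` helper and the values are hypothetical):

```python
import numpy as np

# Hand-picked weights for a tiny 2-layer MLP, stand-ins for a trained net.
W1 = np.array([[1., 0., 0.],
               [0., 1., 0.],
               [0., 0., 1.],
               [1., 1., 1.]])        # input(3) -> hidden(4)
W2 = np.array([[1.,  1., 1.,  1.],
               [1., -1., 1., -1.]])  # hidden(4) -> output(2)

def forward(x, mask1=None, mask2=None):
    """Run the MLP, optionally zeroing individual connections with 0/1 masks,
    a bit like pulling a jumper wire off a breadboard."""
    m1 = np.ones_like(W1) if mask1 is None else mask1
    m2 = np.ones_like(W2) if mask2 is None else mask2
    h = np.maximum(0.0, (W1 * m1) @ x)   # ReLU hidden layer
    return (W2 * m2) @ h

x = np.array([1., 2., 3.])
print(forward(x))             # every path on -> [12. -4.]

# "Switch off" all connections into hidden unit 0 and watch the output shift.
off = np.ones_like(W1)
off[0, :] = 0.0
print(forward(x, mask1=off))  # hidden unit 0 ablated -> [11. -5.]
```

With fixed weights the outputs are fully deterministic, so you can probe which paths carry which part of the answer, which is pretty much the breadboard experiment in miniature.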
Is it possible that a model has learned a mathematical concept or formula in its latent space that is not yet known to humans? We feed networks everything we know, but are we learning from what the network knows? Maybe something along the lines of exploratory search for new theories and concepts using AI, idk.
More soon
If you know the answer to any of these, please email me and make me wiser!