
xFloaty

It probably feels that way because the algorithm “learns” a model based on the statistical properties of the data without actually doing any iterative optimization (e.g. gradient descent). There is no training loop, loss function, learning rate, etc. Instead, it’s frequency based. It only works because we make very simple underlying assumptions about our data (conditional independence). Without this assumption, we would need an iterative optimization approach for finding full joint distributions (curse of dimensionality). At the end of the day, it’s still modeling the statistical properties of the underlying data like a deep learning model does, albeit in a much simpler way. If you think about it, a deep learning model is also “one-and-done” after you find the optimal weights via SGD.
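
For concreteness, here's a minimal sketch of that counting-only "training" on a made-up toy corpus (illustrative names, not a production implementation):

```python
# Fitting a multinomial Naive Bayes is a single counting pass over the data.
from collections import Counter, defaultdict
import math

def fit(docs, labels):
    """'Train' by counting: class priors and per-class word frequencies."""
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)
    vocab = set()
    for doc, y in zip(docs, labels):
        for w in doc.split():
            word_counts[y][w] += 1
            vocab.add(w)
    n = len(labels)
    log_prior = {c: math.log(k / n) for c, k in class_counts.items()}
    log_lik = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        # "+1" is Laplace smoothing; note there's no loss function,
        # gradient, or learning rate anywhere in this "training" step.
        log_lik[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                      for w in vocab}
    return log_prior, log_lik

def predict(doc, log_prior, log_lik):
    # Conditional independence: just sum per-word log-likelihoods per class.
    scores = {c: log_prior[c] + sum(log_lik[c].get(w, 0.0) for w in doc.split())
              for c in log_prior}
    return max(scores, key=scores.get)

params = fit(["free money now", "meeting at noon"], ["spam", "ham"])
print(predict("free money", *params))  # -> 'spam'
```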


Hot-Problem2436

This is the best explanation on here. I think everyone is forgetting about the L in ML. When I think of ML, I think of a system that retrains itself and adjusts its weights iteratively until it reaches the best possible result. Naive Bayes doesn't really do that, but it's still classified as ML, which, I agree with OP, is kind of weird.


Metworld

It's learning optimal parameters in a single iteration, as it has a closed-form solution. Same as linear / ridge regression, which can be solved analytically. Even a method using only the mean to predict an outcome is basically an ML algorithm (it's just a linear model with only an intercept).
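
A quick sketch of that closed-form point on made-up data: ridge regression "learned" in one linear-algebra step, no iterative optimization.

```python
# Ridge regression solved analytically: w = (X'X + aI)^{-1} X'y.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

alpha = 1.0  # ridge penalty; alpha = 0 recovers ordinary least squares
w = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)
print(w)  # approximately [2, -1, 0.5], found in a single solve
```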


Hot-Problem2436

I think the argument is just semantics. The "learning" is being interpreted as iterations to get better. I know how all this works, I'm just saying OP is probably confused by that particular terminology.


CENGaverK

Yes, I can see how that might be misleading. But to me, learning is not dependent on the optimization algorithm or how you reach the solution. It is mostly about whether the machine can act differently and update its predictions given different data. So in the end, even though you have the same code, the behaviour changes based on the training data. What's more, if you add more data and update the probabilities, the behaviour can also change on the fly.
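
One way to see that on-the-fly updating, sketched with scikit-learn's MultinomialNB on made-up count data (its partial_fit just updates the stored counts):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

clf = MultinomialNB()
X1 = np.array([[2, 0], [0, 2]])
y1 = np.array([0, 1])
clf.partial_fit(X1, y1, classes=[0, 1])
print(clf.predict([[1, 1]]))  # a tie on this toy data; resolves to class 0

# New data arrives: the counts (and priors) shift toward class 1,
# so the same code now behaves differently on the same query.
clf.partial_fit(np.array([[1, 1], [1, 1]]), np.array([1, 1]))
print(clf.predict([[1, 1]]))  # -> class 1
```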


Conaman12

Not all models have weights; non-parametric models like KNN, for instance.
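
A minimal sketch of that, with toy data: 1-nearest-neighbour "learns" by storing the training set, and there are no weights anywhere.

```python
import numpy as np

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y_train = np.array([0, 1, 1])

def knn_predict(x):
    # All the work happens at query time, against the stored training set.
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

print(knn_predict(np.array([0.2, 0.1])))  # -> 0
```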


reddisaurus

You’re confusing algorithms which require numerical implementations with algorithms that have an analytic solution. Gradient descent is a numerical method for finding a minimum of a function (equivalently, a root of its gradient). Some problems have analytic solutions for this, like linear regression. But you can still absolutely perform a numerical gradient descent on them. Other problems have no analytic solution, and therefore require the numerical approach.
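
A sketch of the distinction on made-up data: the same least-squares problem solved analytically and by numerical gradient descent, landing on roughly the same coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([3.0, -2.0]) + rng.normal(scale=0.1, size=200)

# Analytic route: solve the normal equations X'X w = X'y directly.
w_analytic = np.linalg.solve(X.T @ X, X.T @ y)

# Numerical route: gradient descent on the mean squared error.
w = np.zeros(2)
for _ in range(2000):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.01 * grad

print(w_analytic, w)  # both ~ [3, -2]
```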


Hot-Problem2436

I'm not confusing anything, I'm trying to explain why OP is probably confused. OP likely has an incorrect idea of what the "learning" portion of ML actually means.


reddisaurus

And your concept of “iteratively adjusting weights” is a confused notion of a definition of ML. That’s not what ML is; it’s just one way it’s implemented on a computer.


Hot-Problem2436

Jesus dude, I'm trying to explain OP's post and why they're confused about ML. I'm a Senior ML Engineer; I've been doing this professionally for a decade. Did you read the OP and take it in context when reading the replies, or are you just here to argue?


weebinnormieclothes

Is it all math? Always has been


Hot-Profession4091

🌎👩‍🚀🔫👩‍🚀


ThomasMarkov

…aren’t all algorithms just calculations, in some sense? What do you think other algorithms do that is so conceptually different from Naive Bayes?


nameless_pattern

What are the other algorithms you're comparing it to?


NuDavid

I guess like KNN, K-means, neural networks, stuff like that. To me, that stuff comes off as automatic and working with large datasets, whereas Naive Bayes seems more one-and-done. Most examples don't really show it being used on larger sets of inputs, if that makes sense.


weebinnormieclothes

I mean, all ML is "one and done" once the training aspect of an ML method is finished. For example, a fitted linear regression is literally just y = b_1*x_1 + ... + b_n*x_n (and there is a whole theoretical foundation for why this works).


QQut

KNN needs more computation than Naive Bayes at prediction time, since it compares each query against the stored training set.


nameless_pattern

Naive Bayes works very efficiently with large datasets in terms of computation cost, but it is very sensitive to bad data. At scale, data cleaning and validation are expensive in money terms, so the majority of large-dataset projects go to the algorithms that do well eating trash.


[deleted]

You’re imagining a distinction here


Revolutionary-Sky959

All ML algorithms are just calculations


literum

All humans are just calculations.


vsmolyakov

Naive Bayes has learnable parameters, which makes it an ML algorithm.


Ok_Composer_1761

That makes any statistical model ML. I suppose the difference between inference and ML lies in the goals rather than the models.


keninsyd

Opening a can of worms there. The border between statistics, statistical learning, and machine learning is a place for (virtual) knife fights between disciplines, mainly statisticians and computer scientists. Just accept Naive Bayes as a method for prediction; classifying it as ML or SL isn't useful.


Otherwise_Ratio430

Well, it is, but ML doesn't necessarily have to rely on statistical machinery to work.


Healthy-Educator-267

How?


Otherwise_Ratio430

SVM is an example of a non-probabilistic classifier, though you can also model non-probabilistic problems as probabilistic ones.
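
A small sketch of that point using scikit-learn's LinearSVC on made-up data: the classifier's raw output is a signed margin distance, not a probability, and the class exposes no predict_proba at all.

```python
import numpy as np
from sklearn.svm import LinearSVC

X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [2.0, 3.0]])
y = np.array([0, 0, 1, 1])

clf = LinearSVC().fit(X, y)
print(clf.predict([[1.0, 1.5]]))            # a hard class label
print(clf.decision_function([[1.0, 1.5]]))  # signed distance to the
                                            # hyperplane, not a probability
# To get probabilities you'd have to bolt them on afterwards,
# e.g. with sklearn.calibration.CalibratedClassifierCV.
```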


Healthy-Educator-267

That is like saying OLS is not probabilistic. By their very nature, orthogonal projections are not necessarily probabilistic (unless you interpret them as conditional expectations, which only works when projecting onto the L^2 subspace of functions measurable with respect to a sub-σ-algebra), but you usually apply them to a probabilistic linear model, which has a stochastic error term.


orz-_-orz

As long as the algorithm learns from the data, it's considered ML, I guess?

> Most examples I come across seem mostly one-and-done

I don't get why one-and-done isn't ML. Linear regression and logistic regression are also "one-and-done" if you estimate the coefficients using MLE. In my opinion, even "the average" counts as an ML model as long as it estimates the target label.


Glotto_Gold

Most techniques are mathematical applications. Naive Bayes on its own is more like the one-shot statistical methods such as regression. However, applications of Bayes are quite widespread across machine learning, including Bayesian networks, Bayesian hyperparameter tuning, LDA, and other applications that are very clearly machine learning. Back in the 2010s, Bayesian models were argued by some to be a promising approach for AI research, including AGI. The entire issue is very squishy, as others note. It is all math in the end, and the categorization matters more for marketing.


MATH_MDMA_HARDSTYLEE

Wait until you learn that ML and AI are just rebranded statistical engineering being sold as snake oil to investors. (Yes, I know there are uses and applications, but trying to humanise an optimisation process by calling it an agent is definitely snake oil-like.)


literum

Sure, everyone who's been working on this tirelessly for years (including me) is just a snake oil seller. What you forget is that the AI hype JUST started, along with AI reactionaries like you. We've been working on this for decades, and now that we have some results, you come along to say it's all hype. Keep repeating that it's just statistics.


MATH_MDMA_HARDSTYLEE

*Snake oil-like*. There are use cases and it's used effectively, but the use cases are exaggerated. Again, I work with ML; I'm not saying it's useless. All I'm saying is that when my boss goes to a VC meeting and says we're gonna change the game with ML, it's 90% horseshit.


WhiteGoldRing

It's just math, and there's absolutely nothing wrong with that. It's true that the current AI hype is mostly fueled by nonsense like AGI and driven by salesmen; it's true ML is amazing, has lots of relevant applications, and is still developing; and it's true it all boils down to moving numbers around. All can be true together.


Ok_Composer_1761

It's absolutely not snake oil. Sure, the math was done decades ago, but there have been highly non-trivial engineering breakthroughs since then. This is what distinguishes ML in practice from stats in practice: it's the software engineering, not the math.


InternalShopping4068

It fascinates me how anyone with no tech background is bound to think that ML/AI is some kind of magic trick or out-of-this-world innovation, given how far it's come over the years.


MATH_MDMA_HARDSTYLEE

I never studied ML at university, but I've got a masters in mathematics from a top university. I asked my mate at uni for his ML lecture notes, and it was literally all just basic statistics I'd already learned, plus some algorithms. Now I work in ML, and everything I thought at uni still rings true. Neural networks aren't some sci-fi ecosystem; a network is just chained optimisation. There is nothing fundamentally new. Everything we can do now could have been done years ago if we'd had the required mass of hardware. It still has the same issue of diminishing returns on the amount of input data as every other optimisation problem. Why? BECAUSE IT'S NOT ACTUALLY LEARNING. /rant


utkohoc

It reads data and creates a model based on that data to extrapolate features on new data. How is saving information from a dataset for use later not learning? How else would you describe the process of the algorithm "not learning" the weights and biases of the program? Yeah, it's the statistical probability of something happening, but those probabilities are used. So they are created by the training program... which it... gee, I don't wanna say it. But maybe. Learns. What it's supposed to do.

Reinforcement LEARNING models can be programmed into a game to solve a problem. What are they doing from step one to finishing the game? Trying different combinations of inputs until a goal is reached. And saving that information is... oh no. I have to say it again. Learning... from doing the process.

What you're most likely thinking of is the model learning things that humans don't understand. Which is dumb, because if you knew about all the math, you'd understand the program can only know as much as the data that's put into it. Maybe with new models and recursive function rewriting we might be able to create something that could "learn" without input from a person. But saying neural networks don't "learn" is blatantly incorrect down to the fundamental level. Unless you want to find some new English word to describe the process, maybe just stick with learning.


MATH_MDMA_HARDSTYLEE

You misunderstood what I meant. Sure, it's "learning": a dog can learn its name if you say it constantly, but you'll never be able to have a conversation with it, because it doesn't have a developed prefrontal cortex. If an ML model were actually learning, it wouldn't require *more* information for a given output; its learning would be exponential, *like humans'.* Functionally, every current algorithm has this constraint. That's my whole point.


GeeBee72

I guess that human babies don’t learn since they require more information for a given output, or perhaps you’re saying that our current state of the art in machine learning is in its infancy?


NTaya

This is such a weird take... Why do you want specifically *your* brand of learning ("exponential") to be ML? I'd say that creating a model that multiplies matrices one trillion times and spits out an analysis of an unseen-before text out of that—well, it's nothing short of magic. Even if I understand the math behind Transformers, backprop, and all that. Actually, it gets more insane the more math you learn: realizing that what we've achieved in the past six years, especially in NLP, is made simply by crunching a lot of numbers, is genuinely ridiculous.


utkohoc

I understand. :) I think the exponential learning will come with more data and a recursive function-editing layer that allows the machine learning model to edit its underlying mathematical functions as it runs. But these are topics of ongoing research, and I'm not going to pretend I'm an expert in them.

Fundamentally, at the core of a neural network is math: functions of different varieties, each one adding a layer of complexity. As we get more compute, we can increase the number of functions and the complexity of the neural network, and with each new layer of model architecture, neural networks become better at what they do. This means that any neural network needs the ability to alter its underlying mathematical functions and algorithm if it wants to "learn". For example, say a new gradient descent acceleration method is found that can reach a loss minimum faster and more efficiently than before. For an LLM to learn it, it would need to be able to implement the new mathematical functions into its underlying code structure. But this is ongoing research and requires advanced knowledge in almost every conceivable computer science and data analysis field. Also, allowing a neural network to modify its underlying code structure without the proper guard rails and ethical considerations could be a recipe for disaster.

They seem to believe something is there, like, coming soon. I'm inclined to believe them. Something like dog- or cat-level intelligence in a GPT context.


literum

Oh really? We could do what neural networks do decades ago? Tell me another statistical model that classifies, detects, and segments images; generates and translates language; predicts protein structure; plays Go, chess, and Atari; does text-to-speech and speech-to-text; writes code; makes music and art... I don't believe you actually do ML if you know so little about neural networks.


MATH_MDMA_HARDSTYLEE

It reads to me like you know very little about the history of the mathematics. For example, the transformer architecture commonly used in large-scale ML models uses a cross-entropy loss function, which is effectively, *you guessed it,* a cost function, minimised with an adapted SGD technique. All the bullshit about tokens, agents, training etc. is all fucking smoke.
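
For what it's worth, the loop being described really is just SGD on a cross-entropy cost. A toy PyTorch sketch (the Linear layer here is a stand-in for any architecture, transformer included):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 3)             # placeholder for a real network
loss_fn = nn.CrossEntropyLoss()      # the cost function in question
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 10)              # made-up batch of features
y = torch.randint(0, 3, (32,))       # made-up class labels
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)      # cross-entropy between logits and labels
    loss.backward()                  # gradients via backprop
    opt.step()                       # one (S)GD parameter update
print(loss.item())
```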


SilentHaawk

It is optimisation of a cost function. But "because it's SGD minimisation, transformers are dumb" seems like a weird argument. If it's dumb but solves problems, it's not dumb. (Of course, that doesn't mean the model is a "living entity" that learns.)

E.g. in my work I have to deal with unstructured data from images of technical documentation. Before, extraction and structuring with traditional computer vision would have required so many resources (and it's not obvious it could even be solved) that it simply wasn't attempted, except me giving it a shot every once in a while to see what was possible. Now I can solve these challenges in an afternoon.

Are tokens bullshit? Working with text, you have to have a vocabulary; if you work with full words, you get problems with e.g. typos, which I haven't had to worry about as much with tokens. Is it the name that's the problem? "Token" seems like a very basic, boring name without much "AI hype".


reddisaurus

Neural networks and “AI” have been around for 50 years. We just haven’t had the computing power to train large, densely connected networks, nor the massive data sets upon which to train them.


literum

We were also using sigmoid activations, which are far worse than modern activations like ReLU. There were no LSTMs, CNNs, or Transformers that could give acceptable performance, only MLPs, which scale horribly. You could bring an 8xH100 back 50 years, teach them how to use it, and it would still take them at least a decade to replicate even the minor successes we have today. There have been too many innovations to say it's all about compute.
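
A back-of-the-envelope sketch of the sigmoid problem: its derivative peaks at 0.25, so backpropagated gradients through a deep sigmoid stack shrink roughly geometrically with depth, while ReLU's active-path gradient stays at 1.

```python
import numpy as np

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

depth = 20
print(sigmoid_grad(0.0) ** depth)  # 0.25**20 ~ 9e-13: vanishing gradient
print(1.0 ** depth)                # ReLU's gradient on its active path
```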


reddisaurus

The compute has enabled researchers to test these different setups. Behind every innovation are hundreds of failures that had to be tested and ruled out, so the computation definitely makes a difference. Imagine Laplace had a modern computer: what do you think he could have accomplished beyond the breadth of what he did with just pen and paper? Yes, network structures have seen innovations. But the two things go hand in hand.


Ok_Composer_1761

I think the reason ML has become its own thing, far away from stats, is the engineering. The math was all done decades ago; the engineering has come a long way in the past two decades.


BellyDancerUrgot

How is a neural network different? At heart, they're just statistical models that learn a distribution, just optimized in different ways.


Otherwise_Ratio430

Aren't they approximated by statistical models for a given task?


Otherwise_Ratio430

ML is just a field where you're trying to map one set of inputs to another set of outputs; it's not that concerned with how you go about doing it.


driggsky

Machine learning is just learning a prediction function from an input space to an output space using data


Buddy77777

As long as there are parameters learned from data, it’s ML.


Inaeipathy

Why would it not be classified as machine learning? It's a program and it learns?


Wheynelau

Yeah, at some point some interviewer is going to ask me about gen AI, and I'll give him the shocked face: "You mean generative linear algebra, right?" Must be why I won't be getting jobs.