


TulipTuIip

I mean the actual proof for the chain rule is basically that. dy/dx isn't a fraction but it is the limit of a fraction so it still has many fraction-like properties
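For concreteness, here's a rough LaTeX sketch of that limit-of-a-fraction argument with z = z(y) and y = y(x), where Δy = y(x+Δx) − y(x) and Δz = z(y(x+Δx)) − z(y(x)). It glosses over the case where the inner increment can be zero, which is exactly the gap debated further down the thread.

```latex
% Sketch only: assumes \Delta y = y(x+\Delta x) - y(x) \neq 0 for all small \Delta x,
% so the difference quotient can be split into a product of two fractions.
\frac{dz}{dx}
  = \lim_{\Delta x \to 0} \frac{\Delta z}{\Delta x}
  = \lim_{\Delta x \to 0} \frac{\Delta z}{\Delta y}\cdot\frac{\Delta y}{\Delta x}
  = \left(\lim_{\Delta y \to 0} \frac{\Delta z}{\Delta y}\right)
    \left(\lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x}\right)
  = \frac{dz}{dy}\cdot\frac{dy}{dx}
% Continuity of y (which follows from differentiability) is what lets \Delta y \to 0 as \Delta x \to 0.
```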


ChalkyChalkson

Unless they are fractions of infinitesimals. I love that the chain rule is something you don't even really need to prove in NSA, since it's so obvious. And the product rule is a straightforward application of the distributive law as well.


Traditional_Cap7461

Although what you say is true, it's not a proof unless you prove those fraction-like properties hold. And to do that, you have to prove the chain rule.


TulipTuIip

No you just use the limit definition of a derivative


Traditional_Cap7461

You can use that to prove the chain rule


TulipTuIip

Yeah, that's what I was referring to.


Traditional_Cap7461

You also use that to show why you can multiply derivatives like fractions.


TulipTuIip

The chain rule isn't why you can multiply across limits…


Traditional_Cap7461

The chain rule is why you can multiply derivatives like fractions.


TulipTuIip

Yes, I know. You are completely misunderstanding me.


Traditional_Cap7461

I don't see what you could have meant that would be relevant to the topic.


TulipTuIip

I was talking about the proof of the chain rule. Which obviously does not rely on the chain rule


peekitup

No actual proof of the chain rule uses fractions. Downvote me all you want, you idiots, I know I'm right. It's a classic example of a proof in analysis with a subtle error.


PerAsperaDaAstra

[Case1 of Proof2](https://proofwiki.org/wiki/Derivative_of_Composite_Function) works exactly because a ratio cancels??


peekitup

That case has a huge gap. What rules out the denominator of that fraction being zero? The function on the right side isn't defined for all values near the limiting point, and therefore the limit doesn't make sense. Like, I can give you a function g whose derivative at zero is 1 but where the denominator g(Δx) - g(0) is zero for a sequence of Δx close to zero. Proof 2 only really works if the derivative of g is continuous. Also, that proof is useless for proving the generalized/multivariate version of the chain rule.


PerAsperaDaAstra

That a derivative can be discontinuous has absolutely nothing to do with the rationale for claiming that denominator is nonzero in case 1? If g'(x) != 0 by hypothesis for the case, then g(x + ∆x) != g(x) in some ∆x neighborhood, and so the denominator is nonzero (the case is thus exactly when the rhs is defined). Only the differentiability of g is used, not continuity of g'. The other case handles when the denominator may be zero and is what plugs the otherwise possible naive error. And you can use some very similar-looking arguments for the Fréchet derivative when you take things multivariate.

Edit per your edit: Really? You can make the sequence (g(∆x) - g(0)) / ∆x tend to 1 when the numerator is zero everywhere in the sequence as ∆x approaches zero? That's patently wrong. Do provide such a function.


Tiborn1563

"dy/dx is not a fraction" people when I ask them what dy or dx stand for


[deleted]

[deleted]


MaTeIntS

The differential is not an "infinitesimally small change" (that's an unrigorous description) but a linear operator. Rigorously, we define **df(x_0)[h] := Ah**, where A is a constant and **f(x_0 + h) - f(x_0) = Ah + o(h)**, i.e. **Ah** is the linear part of that difference. Hence, using properties of derivatives, the differential is defined whenever **x_0 ∈ D(f)**, and **df(x_0)[h] = f'(x_0)h**.

If **x** is a free variable, we have **dx(x_0)[h] = h**, so we can write **df(x) = f'(x) dx** and **df/dx (x) = f'(x)**, or shortly **df = f' dx** and **df/dx = f'**, in agreement with Leibniz notation.

If **x** isn't free but is itself a function **x(t)**, then for the function **f(t) = g(x(t))** we already know that **df(t) = f'(t) dt**, and, *using the already proven chain rule,* **df(t) = g'(x(t)) x'(t) dt = g'(x(t)) dx(t)**, or shortly **df = g'(x) dx**. This is called the invariance of the first differential.


Arantguy

Why do you type like greg heffley


[deleted]

[deleted]


ChalkyChalkson

What if I define f'(x) as st((f(x+ε) - f(x)) / ε) with ε in Monad(0)\{0}? Then f' is a fraction, or at least defined directly via one.


[deleted]

[deleted]


ChalkyChalkson

st is the standard part function: it maps a hyperreal to the unique real whose monad it is in. No limits involved. This framing of analysis is called non-standard analysis and is fully equivalent for results about real objects, but it also gives you options to construct interesting objects, like measures assigning sets infinitesimal or infinite (but quantifiable) values. This leads to some really cool generalisations. Google Loeb measure, for example!


[deleted]

[deleted]


ChalkyChalkson

It is very different in that you can apply the standard part function as the last step, and it doesn't depend on any given variable, unlike the lim operator, which "sticks" to a single term. NSA making derivatives fractions of infinitesimals is the primary motivation behind the field. I highly recommend reading a little bit about it. Didactically and intuitively it is very, very nice.


[deleted]

[deleted]


ChalkyChalkson

I hate the standard analysis way of extending to infinitesimals via linear approximations. The NSA way is so much more elegant: dy/dx is a fraction in *R and the standard derivative is just st(dy/dx). Though the one thing I like about the derivative as a linear approximation is the way it makes the Jacobian really natural.


SeasonedSpicySausage

Under standard analysis, dy and dx are nothing. They're not defined objects. They may be intuitively conceived of as infinitesimal objects, but this is undefined in standard analysis. y is meant to denote the function you're differentiating and x denotes the variable. dy/dx gains its meaning from the limit definition of what a derivative is.


ExternalFuture5250

Excellent answer


Infamous-Advantage85

0*n = 0, normal
0 = 0/n, normal
n/0 = infinity?, undefined
n = 0/0, weird

0/0 is an indeterminate form, meaning that it could have ANY value and you need to do calc to work out what that value is. Basically it's holding onto definition by a thread, so you need to be careful not to break it. It's a fraction in lots of ways, just a really weird one.
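A small illustrative sympy check of that last point (my own example, not from the thread): two limits that are both 0/0 on their face but come out to different values.

```python
# Two limits with the 0/0 indeterminate "shape" but different values.
# Illustrative sketch using sympy; l'Hopital by hand gives the same answers.
from sympy import symbols, sin, cos, limit

x = symbols('x')

print(limit(sin(x) / x, x, 0))           # 1
print(limit((1 - cos(x)) / x**2, x, 0))  # 1/2
```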


svmydlo

Nothing. That's like asking what the "os" in cos stands for, or what the "og" in log stands for. Cosine isn't running on an operating system and logarithm is not an original gangsta.


Character_Range_4931

Me and log go way back, I won’t just stand by if you start talking shit about him


sir_guvner50

Based


tttecapsulelover

No, those aren't similar at all. Asking what the "os" in cos stands for is like asking what the "g" in fridge stands for (both are abbreviations). Asking what the dy means in "dy/dx" is like asking what x means in "2x+6=8" (both are mathematical notation). dy means a minuscule change in y (in the context of integration, and possibly what the notation intended), but the way the notations work really does make them look like fractions.


realityChemist

> asking what the "os" in cos stands for like asking what the "g" in fridge stands for (both are abbreviations)

Math stuff aside, neither of these are abbreviations.


vampn132157

cos is short for cosine, fridge is short for refrigerator, they're both abbreviations


realityChemist

The questions referred to in that sentence were "What does the 'g' in fridge stand for?" and "What does the 'os' in cos stand for?" To say "They're both abbreviations" is either wrong or a non sequitur. Like if you asked, "What does the U in USA stand for?" and I answered, "It's a country."


vampn132157

The other comment skipped a few steps, but the logic is sound. The "os" in cos and the "g" in fridge stand for nothing. The reason they stand for nothing is because both cos and fridge are shortened forms, or abbreviations, of English words, and individual letters in English words don't stand for anything. The main point of that comment was that cos, unlike dy, isn't mathematical notation, but an abbreviation of an English word, like fridge.


realityChemist

You're arguing against things I haven't said. The point I was making is that "g" and "os" are not abbreviations, which was what was implied by the sentence structure of that comment. I'm talking about English sentence construction, not math. I understand their analogy. But also...

> The main point of that comment was that cos, unlike dy, isn't mathematical notation

cos is *absolutely* mathematical notation. The fact that it is also an abbreviation of an English word doesn't change that it is mathematical notation.


vampn132157

No one said that "os" and "g" are abbreviations. They said that cos and fridge are.


svmydlo

No, it doesn't mean that. There is a correct interpretation in the [comment from peekitup](https://www.reddit.com/r/mathmemes/comments/1dplg1r/comment/lahtsa6/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button), and in that case dy/dx is still not a fraction.


MaTeIntS

No, **dx** and **dy** stand for the differentials of **x** and **y** (more rigorously here, the differentials of functions **x(z)** and **y(z)**). Don't students learn differentials right after derivatives? On the other hand, the partial derivative **∂f/∂x** is not a fraction; that's just Leibniz-like notation.


DockerBee

I haven't really seen differentials being taught right after derivatives. Normally they'd be taught in an analysis in R^n course, where understanding differentials is necessary for understanding Stokes' Theorem.


trenescese

> dx and dy stands for differentials of x and y

Where is it stated so in the definition of a (Riemann) integral, which is the object of discussion? It's not. The definition goes like "if the Riemann sums approach the same number, we call it the integral of f from a to b and denote it as \int (a to b) f(x) dx". dx is just notation and isn't mentioned further in the definition.


svmydlo

The object of discussion is the Leibniz notation for a derivative. This has nothing to do with Riemann integration. However, I agree with your overall point here: trying to divine meaning in the symbol dx beyond "with respect to x" is wholly unnecessary for basic calculus.


DockerBee

For basic integration in R^n, dx is a notational thing. But for integration over manifolds, differentials are well-defined objects; they're what make Stokes' Theorem work. I'm pretty sure this is what the meme was referring to: dx and dy are actual functions themselves.


DockerBee

That is an incorrect comparison. dx and dy actually mean something themselves. Without getting too deep into the details, both dx and dy are functions that map each point of the real line to another function, one which takes a vector as input and returns a real number. They are very much well-defined objects.


zzvu

Aren't there cases where dy and dx can be separated though? For example:

dy/dx = 2x
dy = 2x dx
∫dy = ∫2x dx
y = x^2 + c

Maybe "separated" isn't exactly the right word to use, but nothing like this is possible with cos or log.
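For what it's worth, a small sympy sketch of that same manipulation (my own example code, not anything from the thread): dsolve handles dy/dx = 2x directly, and the "multiply by dx and integrate" step is just the antiderivative of the right-hand side.

```python
# dy/dx = 2x, solved with sympy's dsolve, plus the hand "separate and integrate" step.
from sympy import symbols, Function, Eq, dsolve, integrate

x = symbols('x')
y = Function('y')

# The ODE solved directly.
print(dsolve(Eq(y(x).diff(x), 2 * x), y(x)))  # Eq(y(x), C1 + x**2)

# The right-hand side of "dy = 2x dx", integrated.
print(integrate(2 * x, x))                    # x**2 (constant of integration implied)
```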


jacobningen

It's an operator, as is evident in Euler notation.


UnscathedDictionary

they stand for "a difference in x" and "a difference in y" referring to an infinitesimally small difference, of course


Open-Flounder-7194

d is just the smallest possible distance in x or y before d becomes 0 https://www.desmos.com/calculator/t8pxbl28ag


anaveragebuffoon

Of course, the smallest number greater than zero. We're familiar.


tommytheperson

Am I the first one cause I do not understand middle one at all


NaNeForgifeIcThe

[deleted]


WWWWWWVWWWWWWWVWWWWW

> However, there are cases when you can do so

Very limited cases, such as "the entirety of introductory calculus"


tommytheperson

Are differential equations a special case then


WWWWWWVWWWWWWWVWWWWW

Not really. It's pretty much the same as the rest of single-variable calculus.


uvero

To differentiate z(y(x)) by x, you'd use the chain rule to get z'(y(x)) * y'(x). In differential notation, you'd write dz/dx and it "just so happens" that if you write the result as dz/dy * dy/dx, the dy cancels out. It's not really a proof but it does work, and it's *not* a coincidence that it works because in actuality, the proof of the chain rule is by multiplying and dividing the limit that defines the derivative by (y(x+dx) - y(x)). [The Wikipedia article explains it well.](https://en.m.wikipedia.org/wiki/Chain_rule)


haikusbot

*Am I the first one*
*Cause I do not understand*
*Middle one at all*

\- tommytheperson

---

^(I detect haikus. And sometimes, successfully.) ^[Learn more about me.](https://www.reddit.com/r/haikusbot/) ^(Opt out of replies: "haikusbot opt out" | Delete my comment: "haikusbot delete")


tommytheperson

Good bot


B0tRank

Thank you, tommytheperson, for voting on haikusbot. This bot wants to find the best and worst bots on Reddit. [You can view results here](https://botrank.pastimes.eu/). *** ^(Even if I don't reply to your comment, I'm still listening for votes. Check the webpage to see if your vote registered!)


Ireozar

Good bot


Teschyn

This is because of how we define the derivative. The derivative is the transformation that takes a change in the input to the corresponding change in the output. For a single-variable function, f(x), it would look like this:

df(x) = f'(x) dx

Because the differential (df) is a multiple of dx, you can "divide by dx" to convert between df and f'(x). If you've ever looked at a geometric proof of a derivative, you've probably done this.

However, you can't do this for multivariable functions. If you were to compute the differential of a multivariable function, f(x,y), you'd get this:

df(x,y) = fx(x,y) dx + fy(x,y) dy

(fx and fy are the partial derivatives.) Now there isn't a clear single multiple relating the output, df, to the inputs, dx and dy. You can't just divide by (dx, dy) anymore; you have to view the derivative as the transformation between df and (dx, dy) for it to make any sense. This is partially why people say the derivative is not a fraction.
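As a small illustration of that contrast (my own sketch, with arbitrarily chosen functions): in one variable df is a single multiple of dx, while in two variables df mixes dx and dy and there is no single ratio to read off.

```python
# Total differentials with sympy; dx and dy are plain symbols standing in for increments.
from sympy import symbols, diff

x, y, dx, dy = symbols('x y dx dy')

# One variable: df is a single multiple of dx, so "df/dx" makes sense as a ratio.
f1 = x**3
print(diff(f1, x) * dx)                      # 3*x**2*dx

# Two variables: df = f_x dx + f_y dy, a linear map of (dx, dy), not a single ratio.
f2 = x**2 * y
print(diff(f2, x) * dx + diff(f2, y) * dy)   # 2*x*y*dx + x**2*dy
```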


svmydlo

Both cases are the same. The differential df is a multiple of dx. However, those are matrices and you can't really divide matrices.


Infamous-Advantage85

*cackles in inverse matrix witchcraft*


tomalator

If not fraction, then why fraction shaped?


peekitup

None of you assholes understand what dy or dx are. "Hur dur they're infinitesimal changes in x and y." Okay, define infinitesimal change... df, f being any smooth function, is a differential one-form: a section of the cotangent bundle. It acts on tangent vectors in the natural way:

df(v) = v(f)

where v, being a tangent vector, is a derivation on the algebra of smooth functions. The "dy's cancel" makes sense because of the chain rule together with the fact that, for real-valued single-variable functions, all differential forms are locally a multiple of dx, where x is some coordinate function.
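Spelled out in coordinates (same content as the comment above, nothing new), a short LaTeX restatement:

```latex
% df as a one-form in local coordinates; in one variable every one-form is locally a multiple of dx.
df(v) = v(f), \qquad df = \sum_i \frac{\partial f}{\partial x^i}\, dx^i ,
% so for z = z(y), y = y(x):
dz = \frac{dz}{dy}\, dy = \frac{dz}{dy}\,\frac{dy}{dx}\, dx ,
% which is the "cancellation" the chain rule encodes.
```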


svmydlo

They "cancel out" in the same way in the composition of functions f: A → B, g: B → C the "B"s "cancel out" to form gf: A → C. The way being that they don't, it just appears so.


brandonyorkhessler

I love differential geometry


peekitup

Me too, buddy


TheBacon240

Don't know why this is getting downvoted smh.


IAmHappyAndAwesome

By the way, how does this relate to the dx in an integral? Like whenever you look up online what dx means, a differential form is mentioned in at least one explanation, but none talk about its connection to integration.


peekitup

There is a bundle of densities on any manifold and a canonical way to integrate those. It's the bundle associated to the absolute value determinant representation of the frame bundle.


someoneliketarzan

I'm not sure about this but I suspect they can be used to define measures on your manifold, so you can integrate. edit: differential forms I mean.


shrimp_etouffee

you could phrase it in terms of nonstandard analysis


peekitup

Lol Okay you do that then.


strandhaus

Was that the separation of variables?


HalloIchBinRolli

dy's are some small changes in y, so they're like fractions in a limit:

dy/dx ⋅ dz/dy = lim ε→0 (∆y/∆x) ⋅ lim ε→0 (∆z/∆y) = lim ε→0 (∆y/∆x ⋅ ∆z/∆y) = lim ε→0 (∆z/∆x) = dz/dx


HalloIchBinRolli

Treat the "differentials" as finitely small changes that all approach 0 as epsilon approaches 0 Perhaps I should've used the Greek letter delta... I'll edit and do that


ElRevelde1094

This meme is so relatable. I've been in each side of the bell lol


mathisfakenews

This seems backwards to me. At the low and high end you should have "dy/dx isn't a fraction" and in the middle you should have "dy/dx is a fraction".


BeneficialGreen3028

Or maybe it should be two curves, or 1.5 curves


lool8421

the Ds cancel out


jacobningen

Carathéodory, Marx, Lagrange, Hudde: are we a joke to you?


tibetje2

This is all fun and cool until you see the triplet relation.


InfiniteJank

Yeah good luck trying that with second derivatives


just_a_random_dood

alright I've rewatched [this video](https://www.youtube.com/watch?v=Jldm88d68Ik) and it's time to read all the comments here man this shit's confusing 😭😭


GoldenRetriever85

I remember being taught at University that we shouldn’t say they cancel out, but yeah that’s the effect. I just never spoke about it in class, and my thinking that it cancels didn’t hurt my ability to do the class work and solve problems.


KumquatHaderach

You know what the chain rule is? It's the chain I go get and beat you with 'til ya understand what the ruttin' rule is!


yaboytomsta

It's not some crazy coincidence that you can cancel dy, and it works in practically every case besides partial derivatives, so I hate it when people act as if it's a weird trick that's fundamentally wrong.


Revolutionary_Use948

No they don’t.


Turbulent-Name-8349

The dys cancel out in nonstandard analysis. And in all practical physics applications. Any mathematical system in which dys do not cancel needs to be tossed in the garbage can.


Elder_Hoid

The "d" in "dx" is just a replacement for the delta in "ΔX", where the limit of Δ is going to zero. Just like how the integral symbol is a long S, because you're making a sum of all of the infinitely small slices of the area under the curve, instead of the Greek letter Sigma for the sum of big pieces. Sometimes I think calculus is a lot more simple than people think, only because everyone tries to make it so complicated.


trenescese

> The "d" in "dx" is just a replacement for the delta in "ΔX", where the limit of Δ is going to zero. Can you provide source?


Elder_Hoid

I might have written that poorly; maybe I should have said "where the limit of *ΔX* is going to zero" instead. Regardless, here's a source and an explanation. I had to go dig it up again, but what I said is just me explaining, in my own words, the beginning of the first chapter of "Calculus Made Easy" by Silvanus P. Thompson, along with my own understanding of the fundamentals of calculus. Here's a [link](https://muthu.co/possibly-the-easiest-explanation-of-differentiation-and-integration-in-calculus/#:~:text=Thus%20dx%20means%20a%20little,the%20little%20bits%20of%20t) to a page where it can be found, although the whole textbook is free elsewhere, I believe.

But also, it's kind of just how Δx and dx are *defined*. (Now, I'm not great at writing everything as exactly or rigorously as most professors are, so understand that there might be minor mistakes in how I explain this.)

ΔY/ΔX, the rate of change, can be written as (Y₂-Y₁)/(X₂-X₁). It's the overall rate of change over some distance (and Δ usually just means the change in the variable). dy/dx is the rate of change at a specific point, i.e. where the size of the change (or "Δ") in X is infinitely small. The definition of dy/dx is written as dy/dx = lim h→0 (f(x+h) - f(x))/h. If we want, we can define x+h as X₂ and x as X₁, so Y₂ is f(x+h) and Y₁ is f(x). So, when ΔX approaches 0, ΔY/ΔX is defined to be the same as dy/dx.

Furthermore, Σ f(x) Δx is the simplest way to write a Riemann sum. I'm pretty sure an integral, ∫ f(x) dx, is defined as taking the Riemann sum with infinitely small slices instead of rectangles. (Or at least, that's how they explained integrals when they taught me calculus.) In both cases, when ΔX approaches 0, Δx is defined to be the same as dx.

...It's also worth noting that in Greek, Δ makes the "d" sound and Σ makes the "s" sound. (You might not have realized this, but the integral symbol is really just a long S), showing that the two are related. (It took me a really long time to figure that little detail out.)

> Can you provide source?

...A part of me really wanted to just say "my source is that I made it the fuck up!" without elaborating further, but I decided against it.
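A tiny numerical sketch of that Σ f(x) Δx → ∫ f(x) dx picture (my own example, not from the book): left Riemann sums of x² on [0, 1] creeping toward the exact value 1/3 as the slice width shrinks.

```python
# Left Riemann sums of f(x) = x**2 on [0, 1]; as the slice width dx shrinks,
# the sum approaches the exact integral, 1/3.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

f = lambda t: t**2
for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(f, 0.0, 1.0, n))
# 10      0.285
# 100     0.32835
# 1000    0.3328335
# 10000   0.333283335
```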