Introduction¶
SymPy is an open source computer algebra system written in pure Python, licensed under the 3-clause BSD license. It is built with a focus on extensibility and ease of use, through both interactive and programmatic applications. These characteristics have led SymPy to become a popular symbolic library for the scientific Python ecosystem. Ondřej Čertík started the SymPy project in 2006; on January 4, 2011, he passed the project leadership to Aaron Meurer. Currently, SymPy is developed on GitHub using a bazaar community model.
SymPy is itself used by many libraries and tools to support research within a variety of domains, such as Sage (pure mathematics), yt (astronomy and astrophysics), PyDy (multibody dynamics), and SfePy (finite elements).
Unlike many CASs, SymPy does not invent its own programming language. Python itself is used both for the internal implementation and end user interaction. By using the operator overloading functionality of Python, SymPy follows the embedded domain specific language paradigm proposed by Hudak. The exclusive usage of a single programming language makes it easier for people already familiar with that language to use or develop SymPy. Simultaneously, it enables developers to focus on mathematics, rather than language design.
Users can include SymPy in their workflow whether they are working in an interactive environment or writing a standalone program. SymPy does not have a built-in graphical user interface (GUI). However, SymPy exposes a rich interactive display system, and supports registering printers with Jupyter frontends, including the Notebook and Qt Console, which will render SymPy expressions using MathJax or \(\LaTeX\).
We try to keep the source code easy to understand, so you can read the sources and their doctests; the code should be well documented, and if you don't understand something, ask on the mailing list.
Design decisions are archived in the issue tracker, where you can see the rationale behind them.
Once you install SymPy, you will need to import all SymPy functions into the global Python namespace. From here on, all examples assume that the following statement has been executed:
>>> from sympy import *
Basic¶
To begin, we should make something about SymPy clear. SymPy is nothing more than a Python library, like NumPy, Django, or even modules in the Python standard library sys or re. What this means is that SymPy does not add anything to the Python language. Limitations that are inherent in the Python language are also inherent in SymPy. It also means that SymPy tries to use Python idioms whenever possible, making programming with SymPy easy for those already familiar with programming with Python. As a simple example, SymPy uses Python syntax to build expressions. Implicit multiplication (like 3x or 3 x) is not allowed in Python, and thus not allowed in SymPy. To multiply 3 and x, you must type 3*x with the *.
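As a self-contained sketch of this point (symbols is introduced in the next section), explicit multiplication looks like this:

```python
from sympy import symbols

x = symbols('x')
expr = 3*x          # explicit * is required; writing 3x is a Python SyntaxError
print(expr)         # 3*x
```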
Symbols¶
One consequence of this fact is that SymPy can be used in any environment where Python is available. We just import it, like we would any other library:
>>> from sympy import *
This imports all the functions and classes from SymPy into our interactive Python session. Now, suppose we start to do a computation.
>>> x + 1
Traceback (most recent call last):
...
NameError: name 'x' is not defined
Oops! What happened here? We tried to use the variable x, but it tells us that x is not defined. In Python, variables have no meaning until they are defined. SymPy is no different. Unlike many symbolic manipulation systems you may have used, in SymPy, variables are not defined automatically. To define variables, we must use symbols.
>>> x = symbols('x')
>>> x + 1
x + 1
symbols takes a string of variable names separated by spaces or commas, and creates Symbols out of them. We can then assign these to variable names. Later, we will investigate some convenient ways we can work around this issue. For now, let us just define the most common variable names, x, y, and z, for use through the rest of this section.
>>> x, y, z = symbols('x y z')
Finally, note that the name of a Symbol and the name of the variable it is assigned to need not have anything to do with one another.
>>> a, b = symbols('b a')
>>> a
b
>>> b
a
Here we have done the very confusing thing of assigning a Symbol with the name a to the variable b, and a Symbol of the name b to the variable a. Now the Python variable named a points to the SymPy Symbol named b, and vice versa. How confusing. We could have also done something like
>>> crazy = symbols('unrelated')
>>> crazy + 1
unrelated + 1
This also shows that Symbols can have names longer than one character if we want.
Usually, the best practice is to assign Symbols to Python variables of the same name, although there are exceptions: Symbol names can contain characters that are not allowed in Python variable names, or you may just want to avoid typing long names by assigning Symbols with long names to single letter Python variables.
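As a sketch of the first exception: lambda is a reserved keyword in Python, so it can never be a variable name, yet it is a perfectly valid Symbol name:

```python
from sympy import symbols

# 'lambda' cannot be used as a Python variable name, but it can name a Symbol.
lam = symbols('lambda')
print(lam + 1)   # lambda + 1
```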
To avoid confusion, throughout this tutorial, Symbol names and Python variable names will always coincide. Furthermore, the word “Symbol” will refer to a SymPy Symbol and the word “variable” will refer to a Python variable.
Finally, let us be sure we understand the difference between SymPy Symbols and Python variables. Consider the following:
x = symbols('x')
expr = x + 1
x = 2
print(expr)
What do you think the output of this code will be? If you thought 3, you’re wrong. Let’s see what really happens
>>> x = symbols('x')
>>> expr = x + 1
>>> x = 2
>>> print(expr)
x + 1
Changing x to 2 had no effect on expr. This is because x = 2 changes the Python variable x to 2, but has no effect on the SymPy Symbol x, which was what we used in creating expr. When we created expr, the Python variable x was a Symbol. After we created it, we changed the Python variable x to 2. But expr remains the same. This behavior is not unique to SymPy. All Python programs work this way: if a variable is changed, expressions that were already created with that variable do not change automatically. For example
>>> x = 'abc'
>>> expr = x + 'def'
>>> expr
'abcdef'
>>> x = 'ABC'
>>> expr
'abcdef'
In this example, if we want to know what expr is with the new value of x, we need to reevaluate the code that created expr, namely, expr = x + 1. This can be complicated if several lines created expr. One advantage of using a symbolic computation system like SymPy is that we can build a symbolic representation for expr, and then substitute x with values. The correct way to do this in SymPy is to use subs, which will be discussed in more detail later.
>>> x = symbols('x')
>>> expr = x + 1
>>> expr.subs(x, 2)
3
Equals signs¶
Another very important consequence of the fact that SymPy does not extend Python syntax is that = does not represent equality in SymPy. Rather it is Python variable assignment. This is hardcoded into the Python language, and SymPy makes no attempts to change that.
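A short sketch of what = does in this context: assignment rebinds the Python name, leaving any Symbol it previously pointed to untouched:

```python
from sympy import symbols

x = symbols('x')
x = x + 1        # = rebinds the Python name x to a new expression
print(x)         # x + 1; the underlying Symbol named 'x' is unchanged
```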
You may think, however, that ==, which is used for equality testing in Python, is used for SymPy as equality. This is not quite correct either. Let us see what happens when we use ==.
>>> x + 1 == 4
False
Instead of treating x + 1 == 4 symbolically, we just got False. In SymPy, == represents exact structural equality testing. This means that a == b asks whether a and b are exactly the same expression structurally, not whether they are mathematically equal. We always get a bool as the result of ==. There is a separate object, called Eq, which can be used to create symbolic equalities
>>> Eq(x + 1, 4)
Eq(x + 1, 4)
There is one additional caveat about == as well. Suppose we want to know if \((x + 1)^2 = x^2 + 2x + 1\). We might try something like this:
>>> (x + 1)**2 == x**2 + 2*x + 1
False
We got False again. However, \((x + 1)^2\) does equal \(x^2 + 2x + 1\). What is going on here? Did we find a bug in SymPy, or is it just not powerful enough to recognize this basic algebraic fact?
Recall from above that == represents exact structural equality testing. “Exact” here means that two expressions will compare equal with == only if they are exactly equal structurally. Here, \((x + 1)^2\) and \(x^2 + 2x + 1\) are not the same symbolically. One is the power of an addition of two terms, and the other is the addition of three terms.
It turns out that when using SymPy as a library, having == test for exact structural equality is far more useful than having it represent symbolic equality, or having it test for mathematical equality. However, as a new user, you will probably care more about the latter two. We have already seen an alternative to representing equalities symbolically, Eq. To test if two things are equal, it is best to recall the basic fact that if \(a = b\), then \(a - b = 0\). Thus, the best way to check if \(a = b\) is to take \(a - b\) and simplify it, and see if it goes to 0. We will learn later that the function to do this is called simplify. This method is not infallible (in fact, it can be theoretically proven that it is impossible to determine if two symbolic expressions are identically equal in general), but for most common expressions, it works quite well.
>>> a = (x + 1)**2
>>> b = x**2 + 2*x + 1
>>> simplify(a - b)
0
>>> c = x**2 - 2*x + 1
>>> simplify(a - c)
4*x
There is also a method called equals that tests if two expressions are equal by evaluating them numerically at random points.
>>> a = cos(x)**2 - sin(x)**2
>>> b = cos(2*x)
>>> a.equals(b)
True
Two Final Notes: ^ and /¶
You may have noticed that we have been using ** for exponentiation instead of the standard ^. That’s because SymPy follows Python’s conventions. In Python, ^ represents logical exclusive or. SymPy follows this convention:
>>> True ^ False
True
>>> True ^ True
False
>>> x^y
Xor(x, y)
Finally, a small technical discussion on how SymPy works is in order. When you type something like x + 1, the SymPy Symbol x is added to the Python int 1. Python’s operator rules then allow SymPy to tell Python that SymPy objects know how to be added to Python ints, and so 1 is automatically converted to the SymPy Integer object.
This sort of operator magic happens automatically behind the scenes, and you rarely need to even know that it is happening. However, there is one exception. Whenever you combine a SymPy object and a SymPy object, or a SymPy object and a Python object, you get a SymPy object, but whenever you combine two Python objects, SymPy never comes into play, and so you get a Python object.
>>> type(Integer(1) + 1)
<class 'sympy.core.numbers.Integer'>
>>> type(1 + 1)
<... 'int'>
This is usually not a big deal. Python ints work much the same as SymPy Integers, but there is one important exception: division. In SymPy, the division of two Integers gives a Rational:
>>> Integer(1)/Integer(3)
1/3
>>> type(Integer(1)/Integer(3))
<class 'sympy.core.numbers.Rational'>
But in Python / represents either integer division or floating point division, depending on whether you are in Python 2 or Python 3, and depending on whether or not you have run from __future__ import division:
>>> from __future__ import division
>>> 1/2
0.5
To avoid this, we can construct the rational object explicitly
>>> Rational(1, 2)
1/2
This problem also comes up whenever we have a larger symbolic expression with int/int in it. For example:
>>> x + 1/2
x + 0.5
This happens because Python first evaluates 1/2 into 0.5, and then that is cast into a SymPy type when it is added to x. Again, we can get around this by explicitly creating a Rational:
>>> x + Rational(1, 2)
x + 1/2
Substitution¶
>>> from sympy import *
>>> x, y, z = symbols("x y z")
One of the most common things you might want to do with a mathematical expression is substitution. Substitution replaces all instances of something in an expression with something else. It is done using the subs method. For example
>>> expr = cos(x) + 1
>>> expr.subs(x, y)
cos(y) + 1
Substitution is usually done for one of two reasons:
Evaluating an expression at a point. For example, if our expression is cos(x) + 1 and we want to evaluate it at the point x = 0, we substitute to get cos(0) + 1, which is 2.
>>> expr.subs(x, 0)
2
Replacing a subexpression with another subexpression. There are two reasons we might want to do this. The first is if we are trying to build an expression that has some symmetry, such as \(x^{x^{x^x}}\). To build this, we might start with x**y, and replace y with x**y. We would then get x**(x**y). If we replaced y in this new expression with x**x, we would get x**(x**(x**x)), the desired expression.
>>> expr = x**y
>>> expr
x**y
>>> expr = expr.subs(y, x**y)
>>> expr
x**(x**y)
>>> expr = expr.subs(y, x**x)
>>> expr
x**(x**(x**x))
The second is if we want to perform a very controlled simplification, or perhaps a simplification that SymPy is otherwise unable to do. For example, say we have \(\sin(2x) + \cos(2x)\), and we want to replace \(\sin(2x)\) with \(2\sin(x)\cos(x)\). As we will learn later, the function expand_trig does this. However, this function will also expand \(\cos(2x)\), which we may not want. While there are ways to perform such precise simplification, and we will learn some of them in the advanced expression manipulation section, an easy way is to just replace \(\sin(2x)\) with \(2\sin(x)\cos(x)\).
>>> expr = sin(2*x) + cos(2*x)
>>> expand_trig(expr)
2*sin(x)*cos(x) + 2*cos(x)**2 - 1
>>> expr.subs(sin(2*x), 2*sin(x)*cos(x))
2*sin(x)*cos(x) + cos(2*x)
There are two important things to note about subs. First, it returns a new expression. SymPy objects are immutable. That means that subs does not modify the expression in-place. For example
>>> expr = cos(x)
>>> expr.subs(x, 0)
1
>>> expr
cos(x)
>>> x
x
Here, we see that performing expr.subs(x, 0) leaves expr unchanged. In fact, since SymPy expressions are immutable, no function will change them in-place. All functions will return new expressions.
To perform multiple substitutions at once, pass a list of (old, new) pairs to subs.
>>> expr = x**3 + 4*x*y  z
>>> expr.subs([(x, 2), (y, 4), (z, 0)])
40
It is often useful to combine this with a list comprehension to do a large set of similar replacements all at once. For example, say we had \(x^4 - 4x^3 + 4x^2 - 2x + 3\) and we wanted to replace all instances of \(x\) that have an even power with \(y\), to get \(y^4 - 4x^3 + 4y^2 - 2x + 3\).
>>> expr = x**4 - 4*x**3 + 4*x**2 - 2*x + 3
>>> replacements = [(x**i, y**i) for i in range(5) if i % 2 == 0]
>>> expr.subs(replacements)
-4*x**3 - 2*x + y**4 + 4*y**2 + 3
Converting Strings to SymPy Expressions¶
The sympify function (that’s sympify, not to be confused with simplify) can be used to convert strings into SymPy expressions.
For example
>>> str_expr = "x**2 + 3*x - 1/2"
>>> expr = sympify(str_expr)
>>> expr
x**2 + 3*x - 1/2
>>> expr.subs(x, 2)
19/2
Warning
sympify uses eval. Don’t use it on unsanitized input.
evalf¶
To evaluate a numerical expression into a floating point number, use evalf.
>>> expr = sqrt(8)
>>> expr.evalf()
2.82842712474619
SymPy can evaluate floating point expressions to arbitrary precision. By default, 15 digits of precision are used, but you can pass any number as the argument to evalf. Let’s compute the first 100 digits of \(\pi\).
>>> pi.evalf(100)
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068
To numerically evaluate an expression with a Symbol at a point, we might use subs followed by evalf, but it is more efficient and numerically stable to pass the substitution to evalf using the subs flag, which takes a dictionary of Symbol: point pairs.
>>> expr = cos(2*x)
>>> expr.evalf(subs={x: 2.4})
0.0874989834394464
Sometimes there are roundoff errors smaller than the desired precision that remain after an expression is evaluated. Such numbers can be removed at the user’s discretion by setting the chop flag to True.
>>> one = cos(1)**2 + sin(1)**2
>>> (one  1).evalf()
-0.e-124
>>> (one  1).evalf(chop=True)
0
lambdify¶
subs and evalf are good if you want to do simple evaluation, but if you intend to evaluate an expression at many points, there are more efficient ways. For example, if you wanted to evaluate an expression at a thousand points, using SymPy would be far slower than it needs to be, especially if you only care about machine precision. Instead, you should use libraries like NumPy and SciPy.
The easiest way to convert a SymPy expression to an expression that can be numerically evaluated is to use the lambdify function. lambdify acts like a lambda function, except it converts the SymPy names to the names of the given numerical library, usually NumPy. For example
>>> import numpy
>>> a = numpy.arange(10)
>>> expr = sin(x)
>>> f = lambdify(x, expr, "numpy")
>>> f(a)
[ 0.          0.84147098  0.90929743  0.14112001 -0.7568025  -0.95892427
 -0.2794155   0.6569866   0.98935825  0.41211849]
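As a rough, self-contained sketch of the speed motivation above (the array size is an arbitrary illustrative choice), the vectorized lambdify call replaces a much slower pure-SymPy loop:

```python
import numpy as np
from sympy import symbols, sin, lambdify

x = symbols('x')
expr = sin(x)

# Compile the expression once, then evaluate on a whole array at machine precision.
f = lambdify(x, expr, "numpy")
pts = np.linspace(0, 1, 1000)
vals = f(pts)

# The pure-SymPy equivalent evaluates point by point and is far slower:
# vals = [expr.evalf(subs={x: p}) for p in pts]
print(vals.shape)
```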
To use lambdify with numerical libraries that it does not know about, pass a dictionary of sympy_name:numerical_function pairs. For example
>>> def mysin(x):
... """
... My sine. Note that this is only accurate for small x.
... """
... return x
>>> f = lambdify(x, expr, {"sin":mysin})
>>> f(0.1)
0.1
Simplification¶
To make this document easier to read, we are going to enable pretty printing.
>>> from sympy import *
>>> x, y, z = symbols('x y z')
>>> init_printing(use_unicode=True)
simplify¶
Now let’s jump in and do some interesting mathematics. One of the most useful features of a symbolic manipulation system is the ability to simplify mathematical expressions. SymPy has dozens of functions to perform various kinds of simplification. There is also one general function called simplify() that attempts to apply all of these functions in an intelligent way to arrive at the simplest form of an expression. Here are some examples
>>> simplify(sin(x)**2 + cos(x)**2)
1
>>> simplify((x**3 + x**2 - x - 1)/(x**2 + 2*x + 1))
x - 1
>>> simplify(gamma(x)/gamma(x - 2))
(x - 2)⋅(x - 1)
Here, gamma(x) is \(\Gamma(x)\), the gamma function. We see that simplify() is capable of handling a large class of expressions.
But simplify() has a pitfall. It just applies all the major simplification operations in SymPy, and uses heuristics to determine the simplest result. But “simplest” is not a welldefined term. For example, say we wanted to “simplify” \(x^2 + 2x + 1\) into \( (x + 1)^2\):
>>> simplify(x**2 + 2*x + 1)
 2
x  + 2⋅x + 1
We did not get what we want. There is a function to perform this simplification, called factor(), which will be discussed below.
Another pitfall to simplify() is that it can be unnecessarily slow, since it tries many kinds of simplifications before picking the best one. If you already know exactly what kind of simplification you are after, it is better to apply the specific simplification function(s) that apply those simplifications.
Applying specific simplification functions instead of simplify() also has the advantage that specific functions have certain guarantees about the form of their output. These will be discussed with each function below. For example, factor(), when called on a polynomial with rational coefficients, is guaranteed to factor the polynomial into irreducible factors. simplify() has no guarantees. It is entirely heuristical, and, as we saw above, it may even miss a possible type of simplification that SymPy is capable of doing.
simplify() is best when used interactively, when you just want to whittle down an expression to a simpler form. You may then choose to apply specific functions once you see what simplify() returns, to get a more precise result. It is also useful when you have no idea what form an expression will take, and you need a catchall function to simplify it.
Polynomial/Rational Function Simplification¶
expand¶
expand() is one of the most common simplification functions in SymPy. Although it has many uses, for now we will consider its role in expanding polynomial expressions. For example:
>>> expand((x + 1)**2)
 2
x  + 2⋅x + 1
>>> expand((x + 2)*(x - 3))
 2
x  - x - 6
Given a polynomial, expand() will put it into a canonical form of a sum of monomials.
expand() may not sound like a simplification function. After all, by its very name, it makes expressions bigger, not smaller. Usually this is the case, but often an expression will become smaller upon calling expand() on it due to cancellation.
>>> expand((x + 1)*(x - 2) - (x - 1)*x)
-2
factor¶
factor() takes a polynomial and factors it into irreducible factors over the rational numbers. For example:
>>> factor(x**3 - x**2 + x - 1)
        ⎛ 2    ⎞
(x - 1)⋅⎝x  + 1⎠
>>> factor(x**2*z + 4*x*y*z + 4*y**2*z)
           2
z⋅(x + 2⋅y)
For polynomials, factor() is the opposite of expand(). factor() uses a complete multivariate factorization algorithm over the rational numbers, which means that each of the factors returned by factor() is guaranteed to be irreducible.
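As a sketch of this guarantee: a polynomial with no rational roots comes back unchanged, unless an algebraic extension is requested with factor()'s extension flag:

```python
from sympy import symbols, factor, sqrt, expand

x = symbols('x')

# x**2 - 2 has no rational roots, so it is irreducible over the rationals:
p1 = factor(x**2 - 2)
print(p1)               # x**2 - 2, returned unchanged

# Factoring over an algebraic extension of the rationals:
p2 = factor(x**2 - 2, extension=sqrt(2))
print(p2)               # (x - sqrt(2))*(x + sqrt(2))
```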
If you are interested in the factors themselves, factor_list returns a more structured output.
>>> factor_list(x**2*z + 4*x*y*z + 4*y**2*z)
(1, [(z, 1), (x + 2⋅y, 2)])
Note that the input to factor and expand need not be polynomials in the strict sense. They will intelligently factor or expand any kind of expression (though note that the factors may not be irreducible if the input is no longer a polynomial over the rationals).
>>> expand((cos(x) + sin(x))**2)
   2                           2
sin (x) + 2⋅sin(x)⋅cos(x) + cos (x)
>>> factor(cos(x)**2 + 2*cos(x)*sin(x) + sin(x)**2)
                 2
(sin(x) + cos(x))
collect¶
collect() collects common powers of a term in an expression. For example
>>> expr = x*y + x - 3 + 2*x**2 - z*x**2 + x**3
>>> expr
 3    2        2
x  - x ⋅z + 2⋅x  + x⋅y + x - 3
>>> collected_expr = collect(expr, x)
>>> collected_expr
 3    2
x  + x ⋅(-z + 2) + x⋅(y + 1) - 3
collect() is particularly useful in conjunction with the .coeff() method. expr.coeff(x, n) gives the coefficient of x**n in expr:
>>> collected_expr.coeff(x, 2)
-z + 2
cancel¶
cancel() will take any rational function and put it into the standard canonical form, \(\frac{p}{q}\), where \(p\) and \(q\) are expanded polynomials with no common factors, and the leading coefficients of \(p\) and \(q\) do not have denominators (i.e., are integers).
>>> cancel((x**2 + 2*x + 1)/(x**2 + x))
x + 1
─────
x
>>> expr = 1/x + (3*x/2 - 2)/(x - 4)
>>> expr
3⋅x
─── - 2
 2        1
─────── + ─
 x - 4    x
>>> cancel(expr)
   2
3⋅x  - 2⋅x - 8
──────────────
     2
  2⋅x  - 8⋅x
>>> expr = (x*y**2 - 2*x*y*z + x*z**2 + y**2 - 2*y*z + z**2)/(x**2 - 1)
>>> expr
   2                2    2            2
x⋅y  - 2⋅x⋅y⋅z + x⋅z  + y  - 2⋅y⋅z + z
───────────────────────────────────────
                  2
                 x  - 1
>>> cancel(expr)
 2            2
y  - 2⋅y⋅z + z
───────────────
     x - 1
Note that since factor() will completely factorize both the numerator and the denominator of an expression, it can also be used to do the same thing:
>>> factor(expr)
       2
(y - z)
────────
 x - 1
However, if you are only interested in making sure that the expression is in canceled form, cancel() is more efficient than factor().
apart¶
apart() performs a partial fraction decomposition on a rational function.
>>> expr = (4*x**3 + 21*x**2 + 10*x + 12)/(x**4 + 5*x**3 + 5*x**2 + 4*x)
>>> expr
   3       2
4⋅x  + 21⋅x  + 10⋅x + 12
────────────────────────
  4      3      2
 x  + 5⋅x  + 5⋅x  + 4⋅x
>>> apart(expr)
 2⋅x - 1       1     3
────────── - ───── + ─
 2           x + 4   x
x  + x + 1
Trigonometric Simplification¶
Note
SymPy follows Python’s naming conventions for inverse trigonometric functions, which is to prepend an a to the name of the function. For example, the inverse cosine, or arc cosine, is called acos().
>>> acos(x)
acos(x)
>>> cos(acos(x))
x
>>> asin(1)
π
─
2
trigsimp¶
To simplify expressions using trigonometric identities, use trigsimp().
>>> trigsimp(sin(x)**2 + cos(x)**2)
1
>>> trigsimp(sin(x)**4 - 2*cos(x)**2*sin(x)**2 + cos(x)**4)
cos(4⋅x)   1
──────── + ─
   2       2
>>> trigsimp(sin(x)*tan(x)/sec(x))
   2
sin (x)
trigsimp() also works with hyperbolic trig functions.
>>> trigsimp(cosh(x)**2 + sinh(x)**2)
cosh(2⋅x)
>>> trigsimp(sinh(x)/tanh(x))
cosh(x)
Much like simplify(), trigsimp() applies various trigonometric identities to the input expression, and then uses a heuristic to return the “best” one.
expand_trig¶
To expand trigonometric functions, that is, apply the sum or double angle identities, use expand_trig().
>>> expand_trig(sin(x + y))
sin(x)⋅cos(y) + sin(y)⋅cos(x)
>>> expand_trig(tan(2*x))
  2⋅tan(x)
─────────────
     2
- tan (x) + 1
Because expand_trig() tends to make trigonometric expressions larger, and trigsimp() tends to make them smaller, these identities can be applied in reverse using trigsimp()
>>> trigsimp(sin(x)*cos(y) + sin(y)*cos(x))
sin(x + y)
Powers¶
Before we introduce the power simplification functions, a mathematical discussion on the identities held by powers is in order. There are three kinds of identities satisfied by exponents
1. \(x^ax^b = x^{a + b}\)
2. \(x^ay^a = (xy)^a\)
3. \((x^a)^b = x^{ab}\)
Identity 1 is always true.
Identity 2 is not always true. For example, if \(x = y = -1\) and \(a = \frac{1}{2}\), then \(x^ay^a = \sqrt{-1}\sqrt{-1} = i\cdot i = -1\), whereas \((xy)^a = \sqrt{-1\cdot-1} = \sqrt{1} = 1\). However, identity 2 is true at least if \(x\) and \(y\) are nonnegative and \(a\) is real (it may also be true under other conditions as well). A common consequence of the failure of identity 2 is that \(\sqrt{x}\sqrt{y} \neq \sqrt{xy}\).
Identity 3 is not always true. For example, if \(x = -1\), \(a = 2\), and \(b = \frac{1}{2}\), then \((x^a)^b = {\left ((-1)^2\right )}^{1/2} = \sqrt{1} = 1\) and \(x^{ab} = (-1)^{2\cdot1/2} = (-1)^1 = -1\). However, identity 3 is true when \(b\) is an integer (again, it may also hold in other cases as well). Two common consequences of the failure of identity 3 are that \(\sqrt{x^2}\neq x\) and that \(\sqrt{\frac{1}{x}} \neq \frac{1}{\sqrt{x}}\).
To summarize
Identity 1: \(x^ax^b = x^{a + b}\)
    Sufficient conditions to hold: always true.
    Counterexample when conditions are not met: none.
    Important consequences: none.
Identity 2: \(x^ay^a = (xy)^a\)
    Sufficient conditions to hold: \(x, y \geq 0\) and \(a \in \mathbb{R}\).
    Counterexample when conditions are not met: \((-1)^{1/2}(-1)^{1/2} \neq (-1\cdot-1)^{1/2}\).
    Important consequences: \(\sqrt{x}\sqrt{y} \neq \sqrt{xy}\) in general.
Identity 3: \((x^a)^b = x^{ab}\)
    Sufficient conditions to hold: \(b \in \mathbb{Z}\).
    Counterexample when conditions are not met: \({\left((-1)^2\right )}^{1/2} \neq (-1)^{2\cdot1/2}\).
    Important consequences: \(\sqrt{x^2}\neq x\) and \(\sqrt{\frac{1}{x}}\neq\frac{1}{\sqrt{x}}\) in general.
This is important to remember, because by default, SymPy will not perform simplifications if they are not true in general.
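A short sketch of this default behavior using sqrt (the Symbol name p is an illustrative choice):

```python
from sympy import symbols, sqrt

x = symbols('x')                  # complex by default
p = symbols('p', positive=True)

print(sqrt(x**2))   # stays as sqrt(x**2): not true for all complex x
print(sqrt(p**2))   # simplifies to p, since p > 0
```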
In order to make SymPy perform simplifications involving identities that are only true under certain assumptions, we need to put assumptions on our Symbols. We will undertake a full discussion of the assumptions system later, but for now, all we need to know are the following.
By default, SymPy Symbols are assumed to be complex (elements of \(\mathbb{C}\)). That is, a simplification will not be applied to an expression with a given Symbol unless it holds for all complex numbers.
Symbols can be given different assumptions by passing the assumption to symbols(). For the rest of this section, we will be assuming that x and y are positive, and that a and b are real. We will leave z, t, and c as arbitrary complex Symbols to demonstrate what happens in that case.
>>> x, y = symbols('x y', positive=True)
>>> a, b = symbols('a b', real=True)
>>> z, t, c = symbols('z t c')
Note
In SymPy, sqrt(x) is just a shortcut to x**Rational(1, 2). They are exactly the same object.
>>> sqrt(x) == x**Rational(1, 2)
True
powsimp¶
powsimp() applies identities 1 and 2 from above, from left to right.
>>> powsimp(x**a*x**b)
 a + b
x
>>> powsimp(x**a*y**a)
     a
(x⋅y)
Notice that powsimp() refuses to do the simplification if it is not valid.
>>> powsimp(t**c*z**c)
 c  c
t ⋅z
If you know that you want to apply this simplification, but you don’t want to mess with assumptions, you can pass the force=True flag. This will force the simplification to take place, regardless of assumptions.
>>> powsimp(t**c*z**c, force=True)
     c
(t⋅z)
Note that in some instances, in particular, when the exponents are integers or rational numbers, and identity 2 holds, it will be applied automatically
>>> (z*t)**2
 2  2
t ⋅z
>>> sqrt(x*y)
√x⋅√y
This means that it will be impossible to undo this identity with powsimp(), because even if powsimp() were to put the bases together, they would be automatically split apart again.
>>> powsimp(z**2*t**2)
 2  2
t ⋅z
>>> powsimp(sqrt(x)*sqrt(y))
√x⋅√y
expand_power_exp / expand_power_base¶
expand_power_exp() and expand_power_base() apply identities 1 and 2 from right to left, respectively.
>>> expand_power_exp(x**(a + b))
 a  b
x ⋅x
>>> expand_power_base((x*y)**a)
 a  a
x ⋅y
As with powsimp(), identity 2 is not applied if it is not valid.
>>> expand_power_base((z*t)**c)
     c
(t⋅z)
And as with powsimp(), you can force the expansion to happen without fiddling with assumptions by using force=True.
>>> expand_power_base((z*t)**c, force=True)
 c  c
t ⋅z
As with identity 2, identity 1 is applied automatically if the power is a number, and hence cannot be undone with expand_power_exp().
>>> x**2*x**3
 5
x
>>> expand_power_exp(x**5)
 5
x
powdenest¶
powdenest() applies identity 3, from left to right.
>>> powdenest((x**a)**b)
 a⋅b
x
As before, the identity is not applied if it is not true under the given assumptions.
>>> powdenest((z**a)**b)
    b
⎛ a⎞
⎝z ⎠
And as before, this can be manually overridden with force=True.
>>> powdenest((z**a)**b, force=True)
 a⋅b
z
Exponentials and logarithms¶
Note
In SymPy, as in Python and most programming languages, log is the natural logarithm, also known as ln. SymPy automatically provides an alias ln = log in case you forget this.
>>> ln(x)
log(x)
Logarithms have similar issues as powers. There are two main identities
 \(\log{(xy)} = \log{(x)} + \log{(y)}\)
 \(\log{(x^n)} = n\log{(x)}\)
Neither identity is true for arbitrary complex \(x\) and \(y\), due to the branch cut in the complex plane for the complex logarithm. However, sufficient conditions for the identities to hold are if \(x\) and \(y\) are positive and \(n\) is real.
>>> x, y = symbols('x y', positive=True)
>>> n = symbols('n', real=True)
As before, z and t will be Symbols with no additional assumptions.
Note that the identity \(\log{\left (\frac{x}{y}\right )} = \log(x) - \log(y)\) is a special case of identities 1 and 2 by \(\log{\left (\frac{x}{y}\right )} =\) \(\log{\left (x\cdot\frac{1}{y}\right )} =\) \(\log(x) + \log{\left( y^{-1}\right )} =\) \(\log(x) - \log(y)\), and thus it also holds if \(x\) and \(y\) are positive, but may not hold in general.
We also see that \(\log{\left( e^x \right)} = x\) comes from \(\log{\left ( e^x \right)} = x\log(e) = x\), and thus holds when \(x\) is real (and it can be verified that it does not hold in general for arbitrary complex \(x\), for example, \(\log{\left (e^{x + 2\pi i}\right)} = \log{\left (e^x\right )} = x \neq x + 2\pi i\)).
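SymPy applies this simplification automatically when the assumptions justify it; a short sketch:

```python
from sympy import symbols, log, exp

x = symbols('x', real=True)
z = symbols('z')

print(log(exp(x)))   # x: valid because x is real
print(log(exp(z)))   # log(exp(z)): left alone for arbitrary complex z
```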
expand_log¶
To apply identities 1 and 2 from left to right, use expand_log(). As always, the identities will not be applied unless they are valid.
>>> expand_log(log(x*y))
log(x) + log(y)
>>> expand_log(log(x/y))
log(x) - log(y)
>>> expand_log(log(x**2))
2⋅log(x)
>>> expand_log(log(x**n))
n⋅log(x)
>>> expand_log(log(z*t))
log(t⋅z)
As with powsimp() and powdenest(), expand_log() has a force option that can be used to ignore assumptions.
>>> expand_log(log(z**2))
   ⎛ 2⎞
log⎝z ⎠
>>> expand_log(log(z**2), force=True)
2⋅log(z)
logcombine¶
To apply identities 1 and 2 from right to left, use logcombine().
>>> logcombine(log(x) + log(y))
log(x⋅y)
>>> logcombine(n*log(x))
   ⎛ n⎞
log⎝x ⎠
>>> logcombine(n*log(z))
n⋅log(z)
logcombine() also has a force option that can be used to ignore assumptions.
>>> logcombine(n*log(z), force=True)
   ⎛ n⎞
log⎝z ⎠
Special Functions¶
SymPy implements dozens of special functions, ranging from functions in combinatorics to mathematical physics.
An extensive list of the special functions included with SymPy and their documentation is at the Functions Module page.
For the purposes of this tutorial, let’s introduce a few special functions in SymPy.
Let’s define x, y, and z as regular, complex Symbols, removing any assumptions we put on them in the previous section. We will also define k, m, and n.
>>> x, y, z = symbols('x y z')
>>> k, m, n = symbols('k m n')
The factorial function is factorial. factorial(n) represents \(n! = 1\cdot2\cdots(n - 1)\cdot n\). \(n!\) represents the number of permutations of \(n\) distinct items.
>>> factorial(n)
n!
The binomial coefficient function is binomial. binomial(n, k) represents \(\binom{n}{k}\), the number of ways to choose \(k\) items from a set of \(n\) distinct items. It is also often written as \(nCk\), and is pronounced “\(n\) choose \(k\)”.
>>> binomial(n, k)
⎛n⎞
⎜ ⎟
⎝k⎠
The factorial function is closely related to the gamma function, gamma. gamma(z) represents \(\Gamma(z) = \int_0^\infty t^{z - 1}e^{-t}\,dt\), which for positive integer \(z\) is the same as \((z - 1)!\).
>>> gamma(z)
Γ(z)
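A quick numeric check of this relationship (the variable names are ours):

```python
from sympy import Rational, factorial, gamma, pi, sqrt

g5 = gamma(5)                  # Γ(5) = 4! = 24 for positive integers
ghalf = gamma(Rational(1, 2))  # Γ(1/2) = √π, a well-known non-integer value
```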
The generalized hypergeometric function is hyper. hyper([a_1, ..., a_p], [b_1, ..., b_q], z) represents \({}_pF_q\left(\begin{matrix} a_1, \dots, a_p \\ b_1, \dots, b_q \end{matrix} \middle| z \right)\). The most common case is \({}_2F_1\), which is often referred to as the ordinary hypergeometric function.
>>> hyper([1, 2], [3], z)
┌─ ⎛1, 2 │ ⎞
├─ ⎜ │ z⎟
2╵ 1 ⎝ 3 │ ⎠
rewrite¶
A common way to deal with special functions is to rewrite them in terms of one another. This works for any function in SymPy, not just special functions. To rewrite an expression in terms of a function, use expr.rewrite(function). For example,
>>> tan(x).rewrite(sin)
2
2⋅sin (x)
─────────
sin(2⋅x)
>>> factorial(x).rewrite(gamma)
Γ(x + 1)
For some tips on applying more targeted rewriting, see the Advanced Expression Manipulation section.
expand_func¶
To expand special functions in terms of some identities, use expand_func(). For example
>>> expand_func(gamma(x + 3))
x⋅(x + 1)⋅(x + 2)⋅Γ(x)
hyperexpand¶
To rewrite hyper in terms of more standard functions, use hyperexpand().
>>> hyperexpand(hyper([1, 1], [2], z))
log(z + 1)
─────────────
z
hyperexpand() also works on the more general Meijer G-function (see its documentation for more information).
>>> expr = meijerg([[1],[1]], [[1],[]], z)
>>> expr
╭─╮1, 1 ⎛1 1 │ ⎞
│╶┐ ⎜ │ z⎟
╰─╯2, 1 ⎝1 │ ⎠
>>> hyperexpand(expr)
1
─
z
ℯ
combsimp¶
To simplify combinatorial expressions, use combsimp().
>>> combsimp(factorial(n)/factorial(n - 3))
n⋅(n - 2)⋅(n - 1)
>>> combsimp(binomial(n+1, k+1)/binomial(n, k))
n + 1
─────
k + 1
combsimp() also simplifies expressions with gamma.
>>> combsimp(gamma(x)*gamma(1 - x))
π
────────
sin(π⋅x)
Example: Continued Fractions¶
Let’s use SymPy to explore continued fractions. A continued fraction is an expression of the form
\[a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{\ddots + \cfrac{1}{a_n}}}}\]
where \(a_0, \ldots, a_n\) are integers, and \(a_1, \ldots, a_n\) are positive. A continued fraction can also be infinite, but infinite objects are more difficult to represent in computers, so we will only examine the finite case here.
A continued fraction of the above form is often represented as a list \([a_0; a_1, \ldots, a_n]\). Let’s write a simple function that converts such a list to its continued fraction form. The easiest way to construct a continued fraction from a list is to work backwards. Note that despite the apparent symmetry of the definition, the first element, \(a_0\), must usually be handled differently from the rest.
>>> def list_to_frac(l):
... expr = Integer(0)
... for i in reversed(l[1:]):
... expr += i
... expr = 1/expr
... return l[0] + expr
>>> list_to_frac([x, y, z])
1
x + ─────
1
y + ─
z
We use Integer(0) in list_to_frac so that the result will always be a SymPy object, even if we only pass in Python ints.
>>> list_to_frac([1, 2, 3, 4])
43
──
30
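To see why the Integer(0) seed matters, compare exact SymPy arithmetic with plain Python division; a small sketch:

```python
from sympy import Integer, Rational

exact = 1/(3 + Integer(1)/4)  # SymPy keeps the result exact: 4/13
approx = 1/(3 + 1/4)          # plain Python ints fall back to float division
```

Seeding the accumulator with Integer(0) guarantees every subsequent operation happens in SymPy, so the result stays a Rational.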
Every finite continued fraction is a rational number, but we are interested in symbolics here, so let’s create a symbolic continued fraction. The symbols() function that we have been using has a shortcut to create numbered symbols. symbols('a0:5') will create the symbols a0, a1, ..., a4.
>>> syms = symbols('a0:5')
>>> syms
(a₀, a₁, a₂, a₃, a₄)
>>> a0, a1, a2, a3, a4 = syms
>>> frac = list_to_frac(syms)
>>> frac
1
a₀ + ─────────────────
1
a₁ + ────────────
1
a₂ + ───────
1
a₃ + ──
a₄
This form is useful for understanding continued fractions, but let’s put it into standard rational function form using cancel().
>>> frac = cancel(frac)
>>> frac
a₀⋅a₁⋅a₂⋅a₃⋅a₄ + a₀⋅a₁⋅a₂ + a₀⋅a₁⋅a₄ + a₀⋅a₃⋅a₄ + a₀ + a₂⋅a₃⋅a₄ + a₂ + a₄
─────────────────────────────────────────────────────────────────────────
a₁⋅a₂⋅a₃⋅a₄ + a₁⋅a₂ + a₁⋅a₄ + a₃⋅a₄ + 1
Now suppose we were given frac in the above canceled form. In fact, we might be given the fraction in any form, but we can always put it into the above canonical form with cancel(). Suppose that we knew that it could be rewritten as a continued fraction. How could we do this with SymPy? A continued fraction is recursively \(c + \frac{1}{f}\), where \(c\) is an integer and \(f\) is a (smaller) continued fraction. If we could write the expression in this form, we could pull out each \(c\) recursively and add it to a list. We could then get a continued fraction with our list_to_frac() function.
The key observation here is that we can convert an expression to the form \(c + \frac{1}{f}\) by doing a partial fraction decomposition with respect to \(c\). This is because \(f\) does not contain \(c\). This means we need to use the apart() function. We use apart() to pull the term out, then subtract it from the expression, and take the reciprocal to get the \(f\) part.
>>> l = []
>>> frac = apart(frac, a0)
>>> frac
a₂⋅a₃⋅a₄ + a₂ + a₄
a₀ + ───────────────────────────────────────
a₁⋅a₂⋅a₃⋅a₄ + a₁⋅a₂ + a₁⋅a₄ + a₃⋅a₄ + 1
>>> l.append(a0)
>>> frac = 1/(frac - a0)
>>> frac
a₁⋅a₂⋅a₃⋅a₄ + a₁⋅a₂ + a₁⋅a₄ + a₃⋅a₄ + 1
───────────────────────────────────────
a₂⋅a₃⋅a₄ + a₂ + a₄
Now we repeat this process
>>> frac = apart(frac, a1)
>>> frac
a₃⋅a₄ + 1
a₁ + ──────────────────
a₂⋅a₃⋅a₄ + a₂ + a₄
>>> l.append(a1)
>>> frac = 1/(frac - a1)
>>> frac = apart(frac, a2)
>>> frac
a₄
a₂ + ─────────
a₃⋅a₄ + 1
>>> l.append(a2)
>>> frac = 1/(frac - a2)
>>> frac = apart(frac, a3)
>>> frac
1
a₃ + ──
a₄
>>> l.append(a3)
>>> frac = 1/(frac - a3)
>>> frac = apart(frac, a4)
>>> frac
a₄
>>> l.append(a4)
>>> list_to_frac(l)
1
a₀ + ─────────────────
1
a₁ + ────────────
1
a₂ + ───────
1
a₃ + ──
a₄
Of course, this exercise seems pointless, because we already know that our frac is list_to_frac([a0, a1, a2, a3, a4]). So try the following exercise: take a list of symbols, randomize them, create the canceled continued fraction, and see if you can reproduce the original list. For example
>>> import random
>>> l = list(symbols('a0:5'))
>>> random.shuffle(l)
>>> orig_frac = frac = cancel(list_to_frac(l))
>>> del l
Click on “Run code block in SymPy Live” on the definition of list_to_frac above, and then on the above example, and try to reproduce l from frac. I have deleted l at the end to remove the temptation of peeking (you can check your answer at the end by calling cancel(list_to_frac(l)) on the list that you generate, and comparing it to orig_frac).
See if you can think of a way to figure out what symbol to pass to apart() at each stage (hint: think of what happens to \(a_0\) in the formula \(a_0 + \frac{1}{a_1 + \cdots}\) when it is canceled).
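If you want to check your approach, here is one way to automate the hint: after cancel(), the outermost coefficient is the only symbol that does not appear in the denominator. This sketch (the free_symbols and fraction bookkeeping is our own, not part of the tutorial) recovers the shuffled list:

```python
import random

from sympy import Integer, apart, cancel, fraction, symbols

def list_to_frac(l):
    expr = Integer(0)
    for i in reversed(l[1:]):
        expr += i
        expr = 1/expr
    return l[0] + expr

syms = list(symbols('a0:5'))
shuffled = syms[:]
random.seed(42)  # fixed seed, only so the run is reproducible
random.shuffle(shuffled)
frac = cancel(list_to_frac(shuffled))

recovered = []
while len(frac.free_symbols) > 1:
    den = fraction(frac)[1]
    # the outermost coefficient is the one symbol absent from the denominator
    (c,) = frac.free_symbols - den.free_symbols
    frac = apart(frac, c)
    recovered.append(c)
    frac = cancel(1/(frac - c))
recovered.append(frac)  # what remains is the innermost coefficient
```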
Calculus¶
This section covers how to do basic calculus tasks such as derivatives, integrals, limits, and series expansions in SymPy. If you are not familiar with the math of any part of this section, you may safely skip it.
>>> from sympy import *
>>> x, y, z = symbols('x y z')
>>> init_printing(use_unicode=True)
Derivatives¶
To take derivatives, use the diff function.
>>> diff(cos(x), x)
-sin(x)
>>> diff(exp(x**2), x)
⎛ 2⎞
⎝x ⎠
2⋅x⋅ℯ
diff can take multiple derivatives at once. To take multiple derivatives, pass the variable as many times as you wish to differentiate, or pass a number after the variable. For example, both of the following find the third derivative of \(x^4\).
>>> diff(x**4, x, x, x)
24⋅x
>>> diff(x**4, x, 3)
24⋅x
You can also take derivatives with respect to many variables at once. Just pass each derivative in order, using the same syntax as for single variable derivatives. For example, each of the following will compute \(\frac{\partial^7}{\partial x\partial y^2\partial z^4} e^{x y z}\).
>>> expr = exp(x*y*z)
>>> diff(expr, x, y, y, z, z, z, z)
3 2 ⎛ 3 3 3 2 2 2 ⎞ x⋅y⋅z
x ⋅y ⋅⎝x ⋅y ⋅z + 14⋅x ⋅y ⋅z + 52⋅x⋅y⋅z + 48⎠⋅ℯ
>>> diff(expr, x, y, 2, z, 4)
3 2 ⎛ 3 3 3 2 2 2 ⎞ x⋅y⋅z
x ⋅y ⋅⎝x ⋅y ⋅z + 14⋅x ⋅y ⋅z + 52⋅x⋅y⋅z + 48⎠⋅ℯ
>>> diff(expr, x, y, y, z, 4)
3 2 ⎛ 3 3 3 2 2 2 ⎞ x⋅y⋅z
x ⋅y ⋅⎝x ⋅y ⋅z + 14⋅x ⋅y ⋅z + 52⋅x⋅y⋅z + 48⎠⋅ℯ
diff can also be called as a method. The two ways of calling diff are exactly the same, and are provided only for convenience.
>>> expr.diff(x, y, y, z, 4)
3 2 ⎛ 3 3 3 2 2 2 ⎞ x⋅y⋅z
x ⋅y ⋅⎝x ⋅y ⋅z + 14⋅x ⋅y ⋅z + 52⋅x⋅y⋅z + 48⎠⋅ℯ
To create an unevaluated derivative, use the Derivative class. It has the same syntax as diff.
>>> deriv = Derivative(expr, x, y, y, z, 4)
>>> deriv
7
∂ ⎛ x⋅y⋅z⎞
──────────⎝ℯ ⎠
4 2
∂z ∂y ∂x
To evaluate an unevaluated derivative, use the doit method.
>>> deriv.doit()
3 2 ⎛ 3 3 3 2 2 2 ⎞ x⋅y⋅z
x ⋅y ⋅⎝x ⋅y ⋅z + 14⋅x ⋅y ⋅z + 52⋅x⋅y⋅z + 48⎠⋅ℯ
These unevaluated objects are useful for delaying the evaluation of the derivative, or for printing purposes. They are also used when SymPy does not know how to compute the derivative of an expression (for example, if it contains an undefined function, as described in the Solving Differential Equations section).
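For instance, with an undefined function the derivative simply stays symbolic; a small sketch:

```python
from sympy import Derivative, Function, symbols

x = symbols('x')
f = Function('f')  # an undefined function

d = Derivative(f(x), x)
evaluated = d.doit()  # SymPy cannot evaluate this further
```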
Integrals¶
To compute an integral, use the integrate function. There are two kinds of integrals, definite and indefinite. To compute an indefinite integral, that is, an antiderivative, or primitive, just pass the variable after the expression.
>>> integrate(cos(x), x)
sin(x)
Note that SymPy does not include the constant of integration. If you want it, you can add one yourself, or rephrase your problem as a differential equation and use dsolve to solve it, which does add the constant (see Solving Differential Equations).
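A minimal sketch of the dsolve approach (the equation is our own example):

```python
from sympy import Eq, Function, cos, dsolve, symbols

x = symbols('x')
f = Function('f')

# Rephrase the antiderivative of cos(x) as the ODE f'(x) = cos(x);
# dsolve supplies the constant of integration C1
sol = dsolve(Eq(f(x).diff(x), cos(x)), f(x))
```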
To compute a definite integral, pass the argument (integration_variable, lower_limit, upper_limit). For example, to compute
\[\int_0^\infty e^{-x}\,dx,\]
we would do
>>> integrate(exp(-x), (x, 0, oo))
1
As with indefinite integrals, you can pass multiple limit tuples to perform a multiple integral. For example, to compute
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-x^{2} - y^{2}}\,dx\,dy,\]
do
>>> integrate(exp(-x**2 - y**2), (x, -oo, oo), (y, -oo, oo))
π
If integrate is unable to compute an integral, it returns an unevaluated Integral object.
>>> expr = integrate(x**x, x)
>>> print(expr)
Integral(x**x, x)
>>> expr
⌠
⎮ x
⎮ x dx
⌡
As with Derivative, you can create an unevaluated integral using Integral. To later evaluate this integral, call doit.
>>> expr = Integral(log(x)**2, x)
>>> expr
⌠
⎮ 2
⎮ log (x) dx
⌡
>>> expr.doit()
2
x⋅log (x) - 2⋅x⋅log(x) + 2⋅x
integrate uses powerful algorithms that are always improving to compute both definite and indefinite integrals, including heuristic pattern-matching algorithms, a partial implementation of the Risch algorithm, and an algorithm using Meijer G-functions that is useful for computing integrals in terms of special functions, especially definite integrals. Here is a sampling of some of the power of integrate.
>>> integ = Integral((x**4 + x**2*exp(x) - x**2 - 2*x*exp(x) - 2*x -
... exp(x))*exp(x)/((x - 1)**2*(x + 1)**2*(exp(x) + 1)), x)
>>> integ
⌠
⎮ ⎛ 4 2 x 2 x x⎞ x
⎮ ⎝x + x ⋅ℯ - x - 2⋅x⋅ℯ - 2⋅x - ℯ ⎠⋅ℯ
⎮ ──────────────────────────────────────── dx
⎮ 2 2 ⎛ x ⎞
⎮ (x - 1) ⋅(x + 1) ⋅⎝ℯ + 1⎠
⌡
>>> integ.doit()
x
⎛ x ⎞ ℯ
log⎝ℯ + 1⎠ + ──────
2
x - 1
>>> integ = Integral(sin(x**2), x)
>>> integ
⌠
⎮ ⎛ 2⎞
⎮ sin⎝x ⎠ dx
⌡
>>> integ.doit()
⎛√2⋅x⎞
3⋅√2⋅√π⋅fresnels⎜────⎟⋅Γ(3/4)
⎝ √π ⎠
─────────────────────────────
8⋅Γ(7/4)
>>> integ = Integral(x**y*exp(-x), (x, 0, oo))
>>> integ
∞
⌠
⎮ y -x
⎮ x ⋅ℯ dx
⌡
0
>>> integ.doit()
⎧ Γ(y + 1) for -re(y) < 1
⎪
⎪∞
⎪⌠
⎨⎮ y -x
⎪⎮ x ⋅ℯ dx otherwise
⎪⌡
⎪0
⎩
This last example returned a Piecewise expression because the integral does not converge unless \(\Re(y) > -1\).
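If you know the integral converges and only want the answer, integrate accepts a conds option; a sketch assuming the convergence condition holds:

```python
from sympy import symbols, integrate, exp, oo, gamma

x, y = symbols('x y')

# conds='none' skips the convergence analysis and returns the value
# that is valid when the integral converges (here, Re(y) > -1)
res = integrate(x**y*exp(-x), (x, 0, oo), conds='none')
```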
Limits¶
SymPy can compute symbolic limits with the limit function. The syntax to compute
\[\lim_{x\to x_0} f(x)\]
is limit(f(x), x, x0).
>>> limit(sin(x)/x, x, 0)
1
limit should be used instead of subs whenever the point of evaluation is a singularity. Even though SymPy has objects to represent \(\infty\), using them for evaluation is not reliable because they do not keep track of things like rate of growth. Also, things like \(\infty - \infty\) and \(\frac{\infty}{\infty}\) return \(\mathrm{nan}\) (not-a-number). For example
>>> expr = x**2/exp(x)
>>> expr.subs(x, oo)
nan
>>> limit(expr, x, oo)
0
Like Derivative and Integral, limit has an unevaluated counterpart, Limit. To evaluate it, use doit.
>>> expr = Limit((cos(x) - 1)/x, x, 0)
>>> expr
cos(x) - 1
lim ──────────
x─→0⁺ x
>>> expr.doit()
0
To evaluate a limit at one side only, pass '+' or '-' as a third argument to limit. For example, to compute
\[\lim_{x\to 0^+}\frac{1}{x},\]
do
>>> limit(1/x, x, 0, '+')
∞
As opposed to
>>> limit(1/x, x, 0, '-')
-∞
Series Expansion¶
SymPy can compute asymptotic series expansions of functions around a point. To compute the expansion of \(f(x)\) around the point \(x = x_0\) up to terms of order \(x^n\), use f(x).series(x, x0, n). x0 and n can be omitted, in which case the defaults x0=0 and n=6 will be used.
>>> expr = exp(sin(x))
>>> expr.series(x, 0, 4)
2
x ⎛ 4⎞
1 + x + ── + O⎝x ⎠
2
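The defaults x0=0 and n=6 can be seen by calling series with only the variable; a quick sketch:

```python
from sympy import symbols, cos, O

x = symbols('x')

# Calling series with only the variable expands around 0 up to O(x**6)
s = cos(x).series(x)
```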
The \(O\left (x^4\right )\) term at the end represents the Landau order term at \(x=0\) (not to be confused with big O notation used in computer science, which generally represents the Landau order term at \(x=\infty\)). It means that all x terms with power greater than or equal to \(x^4\) are omitted. Order terms can be created and manipulated outside of series. They automatically absorb higher order terms.
>>> x + x**3 + x**6 + O(x**4)
3 ⎛ 4⎞
x + x + O⎝x ⎠
>>> x*O(1)
O(x)
If you do not want the order term, use the removeO method.
>>> expr.series(x, 0, 4).removeO()
2
x
── + x + 1
2
The O notation supports arbitrary limit points (other than 0):
>>> exp(x - 6).series(x, x0=6)
2 3 4 5
(x - 6) (x - 6) (x - 6) (x - 6) ⎛ 6 ⎞
-5 + ──────── + ──────── + ──────── + ──────── + x + O⎝(x - 6) ; x → 6⎠
2 6 24 120
Finite differences¶
So far we have looked at expressions with analytic derivatives and antiderivatives. But what if we want an expression that estimates the derivative of a curve for which we lack a closed-form representation, or whose function values we don’t yet know? One approach is to use finite differences.
You can use the as_finite_diff function on any Derivative instance to generate approximations of derivatives of arbitrary order:
>>> f = Function('f')
>>> dfdx = f(x).diff(x)
>>> as_finite_diff(dfdx)
-f(x - 1/2) + f(x + 1/2)
Here the first-order derivative was approximated around x using a minimum number of points (2 for a 1st-order derivative), evaluated equidistantly using a step size of 1. We can use arbitrary steps (possibly containing symbolic expressions):
>>> f = Function('f')
>>> d2fdx2 = f(x).diff(x, 2)
>>> h = Symbol('h')
>>> as_finite_diff(d2fdx2, [-3*h, h, 2*h])
f(-3⋅h) f(h) 2⋅f(2⋅h)
─────── - ──── + ────────
2 2 2
5⋅h 3⋅h 15⋅h
If you are just interested in evaluating the weights, you can do so manually:
>>> finite_diff_weights(2, [-3, 1, 2], 0)[-1][-1]
[1/5, -1/3, 2/15]
Note that we only need the last element in the last sublist returned from finite_diff_weights. The reason is that finite_diff_weights also generates weights for lower derivatives and for using fewer points (see the documentation of finite_diff_weights for more details).
If using finite_diff_weights directly looks complicated, and the as_finite_diff function operating on Derivative instances is not flexible enough, you can use apply_finite_diff, which takes order, x_list, y_list and x0 as parameters:
>>> x_list = [-3, 1, 2]
>>> y_list = symbols('a b c')
>>> apply_finite_diff(1, x_list, y_list, 0)
3⋅a b 2⋅c
- ─── - ─ + ───
20 4 5