Wolfram Language & System Documentation Center
ConvexOptimization

ConvexOptimization[f,cons,vars]

finds values of variables vars that minimize the convex objective function f subject to convex constraints cons.

ConvexOptimization[…,"prop"]

specifies what solution property "prop" should be returned.

Details and Options

  • Convex optimization is global nonlinear optimization for convex functions with convex constraints. For convex problems, the global solution can be found.
  • Convex optimization includes many other forms of optimization, including linear optimization, linear-fractional optimization, quadratic optimization, second-order cone optimization, semidefinite optimization and conic optimization.
  • If g is concave, ConvexOptimization[-g,cons,vars] will maximize g.
  • Convex optimization finds x^* that solves the following problem:
  • minimize f(x)
    subject to the constraints g(x) ⪯ 0, h(x) == 0
    where f and g are convex functions of x, h is affine and x ∈ ℝ^n
  • Equality constraints of the form h(x) == 0, where h is affine, may be included in cons.
  • Mixed-integer convex optimization finds x^* and y^* that solve the problem:
  • minimize f(x, y)
    subject to the constraints g(x, y) ⪯ 0, h(x, y) == 0
    where x ∈ ℝ^n and y ∈ ℤ^m
  • When the objective function is real valued, ConvexOptimization solves problems with x ∈ ℂ^n by internally converting to the real variables Re(x) and Im(x), where x = Re(x) + I Im(x).
  • The variable specification vars should be a list with elements giving variables in one of the following forms:
  • v | variable with name v and dimensions inferred
    v ∈ Reals | real scalar variable
    v ∈ Integers | integer scalar variable
    v ∈ Complexes | complex scalar variable
    v ∈ ℛ | vector variable restricted to the geometric region ℛ
    v ∈ Vectors[n, dom] | vector variable in ℝ^n, ℤ^n or ℂ^n
    v ∈ Matrices[{m, n}, dom] | matrix variable in ℝ^(m×n), ℤ^(m×n) or ℂ^(m×n)
  • ConvexOptimization automatically does transformations necessary to find an efficient method to solve the minimization problem.
  • The primal minimization problem as solved has a related maximization problem that is the Lagrangian dual problem. The dual maximum value is always less than or equal to the primal minimum value, so it provides a lower bound. The dual maximizer provides information about the primal problem, including sensitivity of the minimum value to changes in the constraints.
  • The possible solution properties "prop" include:
  • "PrimalMinimizer" | a list of variable values that minimizes the objective
    "PrimalMinimizerRules" | values for the variables vars = {v1, …} that minimize the objective
    "PrimalMinimizerVector" | the vector that minimizes the objective
    "PrimalMinimumValue" | the minimum value of the objective
    "DualMaximizer" | the vectors that maximize the dual problem
    "DualMaximumValue" | the dual maximum value
    "DualityGap" | the difference between the dual and primal optimal values
    "Slack" | vectors that convert inequality constraints to equality
    {"prop1", "prop2", …} | several solution properties
  • The following options may be given:
  • MaxIterations | Automatic | maximum number of iterations to use
    Method | Automatic | the method to use
    PerformanceGoal | $PerformanceGoal | aspects of performance to try to optimize
    Tolerance | Automatic | the tolerance to use for internal comparisons
    WorkingPrecision | MachinePrecision | precision to use in internal computations
  • The option Method -> method may be used to specify the method to use. Available methods include:
  • Automatic | choose the method automatically
    solver | transform the problem, if possible, so that it can be solved by solver
    "SCS" | SCS splitting conic solver
    "CSDP" | CSDP semidefinite optimization solver
    "DSDP" | DSDP semidefinite optimization solver
    "MOSEK" | commercial MOSEK convex optimization solver
    "Gurobi" | commercial Gurobi linear and quadratic optimization solver
    "Xpress" | commercial Xpress linear and quadratic optimization solver
  • Methodsolver may be used to specify that a particular solver is used so that the dual formulation used corresponds to the formulation documented for solver. Possible solvers are LinearOptimization, LinearFractionalOptimization, QuadraticOptimization, SecondOrderConeOptimization, SemidefiniteOptimization, ConicOptimization and GeometricOptimization.

Examples


Basic Examples  (2)

Minimize |x + 2 y| subject to linear constraints:
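
A minimal sketch of such a call; the particular constraints are illustrative assumptions:

    ConvexOptimization[Abs[x + 2 y], {x + y >= 1, x >= 0}, {x, y}]
    (* e.g. {x -> 2., y -> -1.}; the minimum value is 0 *)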

Minimize a matrix norm subject to constraints on some elements:

Scope  (28)

Basic Uses  (12)

Minimize an objective subject to two linear inequality constraints:

Several linear inequality constraints can be expressed with VectorGreaterEqual:

Use v>= or \[VectorGreaterEqual] to enter the vector inequality sign ⪰:

An equivalent form using scalar inequalities:
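
For example, with illustrative bounds:

    ConvexOptimization[x + 2 y, {VectorGreaterEqual[{{x, y}, {1, 2}}]}, {x, y}]
    (* {x -> 1., y -> 2.} *)

    (* the equivalent scalar form *)
    ConvexOptimization[x + 2 y, {x >= 1, y >= 2}, {x, y}]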

Use a vector variable:

The vector inequality ⪰ may not be the same as the scalar inequality >= due to possible threading in >=:

To avoid unintended threading, use Inactive[Plus]:

Use constant parameter equations to avoid unintended threading:

VectorGreaterEqual represents a conic inequality with respect to the "NonNegativeCone":

To explicitly specify the dimension of the cone, use {"NonNegativeCone",n}:

Find the solution:

Minimize an objective subject to a norm constraint:

Specify the constraint using a conic inequality with "NormCone":

Find the solution:
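
A sketch, using an illustrative linear objective over the unit disk:

    ConvexOptimization[x + y, {VectorGreaterEqual[{{1, x, y}, 0}, {"NormCone", 3}]}, {x, y}]
    (* {x -> -0.707107, y -> -0.707107} *)

    (* the same constraint written directly *)
    ConvexOptimization[x + y, {Norm[{x, y}] <= 1}, {x, y}]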

Minimize an objective subject to the positive semidefinite matrix constraint {{x, 1}, {1, y}} ⪰ 0:

Find the solution:
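
A sketch with an illustrative linear objective; the constraint forces x >= 0, y >= 0 and x y >= 1:

    ConvexOptimization[x + y, {VectorGreaterEqual[{{{x, 1}, {1, y}}, 0}, {"SemidefiniteCone", 2}]}, {x, y}]
    (* {x -> 1., y -> 1.} *)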

Use a vector variable x and Indexed[x, i] to specify individual components:

Use Vectors[n,Reals] to specify the dimension of a vector variable when it may be ambiguous:

Specify non-negative constraints using NonNegativeReals:

An equivalent form using the vector inequality ⪰:

Maximize the area of a rectangle with perimeter at most 1 and height at most half the width:

When the width and height are positive, the problem can be solved by GeometricOptimization methods:

Using Method -> GeometricOptimization implicitly assumes positivity:

Integer Variables  (4)

Specify integer variables using Integers:
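
For instance, with an illustrative objective:

    ConvexOptimization[(x - 7/3)^2 + (y - 1/3)^2, {x >= 0, y >= 0},
     {Element[x, Integers], Element[y, Integers]}]
    (* {x -> 2, y -> 0} *)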

Specify integer domain constraints on vector variables using Vectors[n,Integers]:

Specify non-negative integer domain constraints using NonNegativeIntegers:

Specify non-positive integer domain constraints using NonPositiveIntegers:

Complex Variables  (8)

Specify complex variables using Complexes:
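
For instance, with an illustrative objective and bound:

    ConvexOptimization[Abs[z - (2 + 3 I)], {Abs[z] <= 2}, {Element[z, Complexes]}]
    (* z moves to the boundary point of the disk nearest to 2 + 3 I *)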

Minimize a real objective with complex variables and complex constraints:

Write x in terms of its real and imaginary parts. Expanding out the constraints into real components gives:

Solve the problem with real-valued objective and complex variables and constraints:

Solve the same problem with real variables and constraints:

Use a quadratic objective with Hermitian matrix and real-valued variables:

Use the objective (1/2) Inactive[Dot][Conjugate[x], q, x] with a Hermitian matrix q and complex variables:

Use a quadratic constraint with Hermitian matrix and real-valued variables:

Use the constraint (1/2) Inactive[Dot][Conjugate[x], q, x] <= d with a Hermitian matrix q and complex variables:

Find the Hermitian matrix with minimum 2-norm (largest singular value) such that the matrix is positive semidefinite:

The minimum for the largest singular value is:

Use a linear matrix inequality constraint a_0 + a_1 x_1 + a_2 x_2 ⪰ 0 with Hermitian or real symmetric matrices:

The variables in linear matrix inequalities need to be real for the sum to remain Hermitian:

Primal Model Properties  (1)

Minimize a function over the intersection of a triangle and a disk:

Get the primal minimizer as a vector:

Get the minimal value:
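
Requesting several properties at once follows this pattern; the problem here is an illustrative stand-in:

    ConvexOptimization[x + y, {Norm[{x, y}] <= 1}, {x, y},
     {"PrimalMinimizerVector", "PrimalMinimumValue"}]
    (* {{-0.707107, -0.707107}, -1.41421} *)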

Plot the solution:

Dual Model Properties  (3)

Minimize an objective subject to two constraints:

The dual problem is to maximize the dual objective subject to the dual constraints:

The primal minimum value and the dual maximum value coincide because of strong duality:

That is the same as having a duality gap of zero. In general, the duality gap is non-negative at optimal points:

Get the dual maximum value and dual maximizer directly using solution properties:

The "DualMaximizer" can be obtained with:

The dual maximizer vector partitions match the number and dimensions of the dual cones:

To get the dual format for a particular problem-type solver, specify it as a method option:

Options  (13)

Method  (8)
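
The method is selected with the Method option; for example, on an illustrative problem:

    ConvexOptimization[x + y, {Norm[{x, y}] <= 1}, {x, y}, Method -> "SCS"]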

"SCS" is a splitting conic solver method:

"CSDP" is an interior point method for semidefinite problems:

"DSDP" is an alternative interior point method for semidefinite problems:

"IPOPT" is an interior point method for nonlinear problems:

Different methods have different default tolerances, which affects the accuracy and precision:

Compute exact and approximate solutions:

"SCS" has a looser default tolerance:

"CSDP", "DSDP" and "IPOPT" have tighter default tolerances:

When method "SCS" is specified, it is called with the SCS library default tolerance of 10^-3:

With default options, this problem is solved by method "SCS" with tolerance 10^-6:

Use methods "CSDP" or "DSDP" for constraints that are converted to semidefinite constraints:

Solve the problem using method "CSDP":

Solve the problem using method "DSDP":

Use method "IPOPT" to obtain accurate solutions when "CSDP" and "DSDP" are not applicable:

"IPOPT" produces more accurate results than "SCS", but is typically much slower:

Compare timing with method "SCS":

PerformanceGoal  (1)

The default value of the option PerformanceGoal is $PerformanceGoal:

Use PerformanceGoal -> "Quality" to get a more accurate result:

Use PerformanceGoal -> "Speed" to get a result faster, but at the cost of quality:
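
For example, on an illustrative problem:

    ConvexOptimization[x + y, {Norm[{x, y}] <= 1}, {x, y}, PerformanceGoal -> "Speed"]
    (* compare with PerformanceGoal -> "Quality" *)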

Compare the timings:

The "Speed" goal gives a less accurate result:

Tolerance  (2)

A smaller Tolerance setting gives a more precise result:
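
For example, on an illustrative problem whose exact minimum is -Sqrt[2]:

    ConvexOptimization[x + y, {Norm[{x, y}] <= 1}, {x, y},
     "PrimalMinimumValue", Tolerance -> 10^-8]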

Compute the exact minimum value with Minimize:

Compute the error in the minimum value with different Tolerance settings:

Visualize the change in minimum value error with respect to tolerance:

A smaller Tolerance setting gives a more precise answer, but may take longer to compute:

The tighter tolerance gives a more precise answer:

WorkingPrecision  (2)

The default working precision is MachinePrecision:

Using WorkingPrecisionInfinity will give an exact solution if possible:

WorkingPrecision other than MachinePrecision and ∞ will try to use a method with extended precision support:

Using WorkingPrecisionAutomatic will try to use the precision of the input problem:

Solve a problem with a quadratic objective using 24-digit precision:

There is currently no method that solves problems with quadratic objectives using exact arithmetic. When the requested precision is not supported, the computation uses machine numbers:

Applications  (30)

Basic Modeling Transformations  (11)

Maximize an objective subject to constraints. Solve the maximization problem by negating the objective function:

Negate the primal minimum value to get the corresponding maximal value:
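
A sketch, with an illustrative objective and constraint:

    sol = ConvexOptimization[-(2 x + y), {x^2 + y^2 <= 1}, {x, y}, "PrimalMinimumValue"];
    -sol
    (* the maximum of 2 x + y over the unit disk, Sqrt[5] ~ 2.23607 *)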

Minimize an objective subject to a constraint that is not given in an explicitly convex form. Use a semidefinite constraint to make the convexity explicit:

A symmetric matrix is positive semidefinite if and only if the determinants of all its principal submatrices are non-negative:

Find the solution:

Minimize x^2/y subject to constraints, assuming y > 0. Using the auxiliary variable t, the objective is to minimize t such that x^2/y <= t:

Check that {{t, x}, {x, y}} ⪰ 0 implies x^2/y <= t when y > 0:

A Schur complement condition says that for y > 0, the block matrix {{t, x}, {x, y}} ⪰ 0 iff t - x^2/y >= 0. Therefore, x^2/y <= t iff {{t, x}, {x, y}} ⪰ 0. Use Inactive[Plus] for constructing the constraints to avoid threading:

Minimize Norm[x] over an ellipse centered at a given point:

The epigraph transformation can be used to construct a problem with a linear objective and additional variable and constraint:

In this form, the problem can be solved directly with ConicOptimization:
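
A sketch of the transformation, with illustrative problem data:

    (* original convex form *)
    ConvexOptimization[Norm[{x, y}], {x + y == 3}, {x, y}]

    (* epigraph form: a linear objective with an added variable and constraint *)
    ConicOptimization[t, {Norm[{x, y}] <= t, x + y == 3}, {x, y, t}]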

Minimize φ(f(x)), where φ is a nondecreasing function, by instead minimizing f(x). The primal minimizer will remain the same for both problems. Consider minimizing such a composite objective subject to a constraint:

The minimum value for φ(f(x)) can be obtained by applying φ to the minimum value of f(x):

ConvexOptimization will automatically do this transformation:

Find decision variables x, y that minimize the largest eigenvalue of a symmetric matrix A(x, y) that depends linearly on x, y. The problem can be formulated as a linear matrix inequality, since λ I - A(x, y) ⪰ 0 is equivalent to λ >= λ_i(A(x, y)) for every eigenvalue λ_i. Define the linear matrix function A(x, y):

A real symmetric matrix A can be diagonalized with an orthogonal matrix Q, so that Q^T A Q is diagonal. Hence λ I - A ⪰ 0 iff λ >= λ_i for each eigenvalue λ_i of A. Numerically simulate to show that these formulations are equivalent:

The resulting problem:

Run a Monte Carlo simulation to check the plausibility of the result:

Find decision variables that maximize the smallest eigenvalue of a symmetric matrix A(x) that depends linearly on them. Define the linear matrix function A(x):

The problem can be formulated as a linear matrix inequality, since A(x) - λ I ⪰ 0 is equivalent to λ <= λ_i(A(x)) for every eigenvalue λ_i. To maximize λ, minimize -λ:

Run a Monte Carlo simulation to check the plausibility of the result:

Find decision variables that minimize the difference between the largest and the smallest eigenvalues of a symmetric matrix A(x) that depends linearly on them. Define the linear matrix function A(x):

The problem can be formulated as a linear matrix inequality, since μ I ⪰ A(x) ⪰ λ I is equivalent to λ <= λ_i(A(x)) <= μ for every eigenvalue λ_i. Solve the resulting problem by minimizing μ - λ:

In this case, the minimum and maximum eigenvalues coincide and the difference is 0:

Minimize the largest (by absolute value) eigenvalue of a symmetric matrix A(x, y) that depends linearly on the decision variables:

The largest eigenvalue λ satisfies λ I - A(x, y) ⪰ 0. The largest (by absolute value) negative eigenvalue of A(x, y) is the largest eigenvalue of -A(x, y) and satisfies λ I + A(x, y) ⪰ 0:

Find decision variables that minimize the largest singular value of a matrix A(x, y) that depends linearly on the decision variables:

The largest singular value σ of A(x, y) is the square root of the largest eigenvalue of A(x, y)^T A(x, y) and, from a preceding example, it satisfies the block linear matrix inequality {{σ I, A(x, y)}, {A(x, y)^T, σ I}} ⪰ 0:

Plot the result:

For quadratic sets {x : x.A.x + 2 b.x + c <= 0}, which include ellipsoids, quadratic cones and paraboloids, determine whether one set is a subset of another, where the A are symmetric matrices, the b are vectors and the c are scalars:

Assuming that the sets are full dimensional, the S-procedure says that one set contains the other iff there exists some non-negative number λ such that a corresponding linear matrix inequality between the two quadratic forms holds. Visually see that there exists a non-negative λ:

Use 0 for an objective function, since only feasibility is of concern. Since λ >= 0, it follows that the containment holds:

Geometry Problems  (8)

Minimize the length of the diagonal of a rectangle of area 4 such that the width plus three times the height is less than 7:

Find the minimum distance between two disks of radius 1 with given centers. Let x be a point on disk 1 and y a point on disk 2. The objective is to minimize Norm[x - y] subject to the constraints that each point lies in its disk:
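
A sketch with illustrative centers:

    c1 = {0, 0}; c2 = {3, 3};
    ConvexOptimization[Norm[{x1, y1} - {x2, y2}],
     {Norm[{x1, y1} - c1] <= 1, Norm[{x2, y2} - c2] <= 1}, {x1, y1, x2, y2}]
    (* the minimum distance is Norm[c2 - c1] - 2, about 2.24264 *)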

Visualize the positions of the two points:

The distance between the points is:

Find the half-lengths of the principal axes that maximize the volume of an ellipsoid with a surface area of at most 1:

The surface area can be approximated by:

Maximize the volume by minimizing its reciprocal:

The result is a sphere. Including additional constraints on the axis lengths changes this:

Find the radius and center of a minimal enclosing ball that encompasses a given region:

Minimize the radius r subject to the constraint that every point of the region lies within distance r of the center:

Visualize the enclosing ball:

The minimal enclosing ball can be found efficiently using BoundingRegion:

Find the analytic center of a convex polygon. The analytic center is a point that maximizes the product of distances to the constraints:

Each segment of the convex polygon can be represented as the intersection of half-planes h_i.x <= g_i. Extract the linear inequalities:

The objective is to maximize the product of the distances g_i - h_i.x. Taking the logarithm and negating the objective, the transformed objective is -Σ_i Log[g_i - h_i.x]:

Using auxiliary variables t_i, the transformed objective is -Σ_i t_i subject to the constraints t_i <= Log[g_i - h_i.x]:

Visualize the location of the center:

Test whether an ellipsoid is a subset of another ellipsoid of the form :

Using the S-procedure, it can be shown that ellipse 2 is a subset of ellipse 1 iff there is a non-negative λ satisfying a corresponding linear matrix inequality:

Check if the condition is satisfied:

Convert the ellipsoids into explicit form and confirm that ellipse 2 is within ellipse 1:

Move ellipsoid 2 such that it overlaps with ellipsoid 1:

A test now shows that the problem is infeasible, indicating that ellipsoid 2 is not a subset of ellipsoid 1:

Find the maximum-area ellipse, parametrized as x = a.y + b with Norm[y] <= 1, that can be fitted into a convex polygon:

Each segment of the convex polygon can be represented as the intersection of half-planes h_i.x <= g_i. Extract the linear inequalities:

Applying the parametrization to the half-planes gives h_i.(a.y + b) <= g_i for all Norm[y] <= 1. The term h_i.a.y has maximum Norm[a.h_i] for symmetric a. Thus, the constraints are Norm[a.h_i] + h_i.b <= g_i:

Maximizing the area is equivalent to maximizing Det[a], which can be done by minimizing -Log[Det[a]]:

Convert the parametrized ellipse into explicit form:

Find the smallest ellipsoid, parametrized as {x : Norm[a.x + b] <= 1}, that encompasses a set of points in 3D by minimizing the volume:

For each point p_i, the constraint Norm[a.p_i + b] <= 1, i = 1, 2, …, n must be satisfied:

Minimizing the volume is equivalent to maximizing Det[a], which can be done by minimizing -Log[Det[a]]:

Convert the parametrized ellipsoid into explicit form:

A bounding ellipsoid, not necessarily minimum volume, can also be found using BoundingRegion:

Data-Fitting Problems  (4)

Minimize Norm[a.x - b] subject to constraints on x, for a given matrix a and vector b:
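
A sketch with small illustrative data and a non-negativity constraint on x:

    a = {{1, 0}, {1, 1}, {1, 2}}; b = {1, 2, 2};
    ConvexOptimization[Norm[a . {u, v} - b], {VectorGreaterEqual[{{u, v}, 0}]}, {u, v}]
    (* {u -> 1.16667, v -> 0.5} *)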

Fit a cubic curve to discrete data such that the first and last points of the data lie on the curve:

Construct the matrix using DesignMatrix:

Define the constraint so that the first and last points must lie on the curve:

Find the coefficients by minimizing the norm of the residual:

Compare fit with data:

Find a fit that is less sensitive to outliers for nonlinear discrete data by minimizing a robust norm of the residual:

Fit the data using a set of basis functions; the interpolating function will be a linear combination of the bases:

Find the solution:

Visualize the fit:

Compare the interpolating function with the reference function:

Find an L1-regularized fit to complex data by minimizing Norm[a.λ - b] + σ Norm[λ, 1] for a complex λ:

Construct the matrix a using DesignMatrix for the chosen basis:

Find the coefficients λ:

Let f be the fit defined as a function of the real and imaginary components of the input:

Visualize the result for the real component of the input:

Visualize the result for the imaginary component of the input:

Sum-of-Squares Representation  (1)

Represent a given polynomial f in terms of a sum-of-squares polynomial:

The objective is to find a positive semidefinite matrix q such that f == m.q.m, where m is a vector of monomials:

Construct the symmetric matrix q:

Find the polynomial coefficients of f and m.q.m and make sure they are equal:

Find the elements of q:

The quadratic term m.q.m can be written as a sum of squares Norm[l.m]^2, where l is a triangular matrix obtained from the Cholesky decomposition of q:

Compare the sum-of-squares polynomial to the given polynomial:

Classification Problems  (3)

Find a line that separates two groups of points:

For separation, set 1 must satisfy w.x + b >= 1 and set 2 must satisfy w.x + b <= -1:

The objective is to minimize Norm[w], since 2/Norm[w] is the thickness of the slab between w.x + b == 1 and w.x + b == -1:
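
A sketch with synthetic, linearly separable data:

    SeedRandom[1];
    pts1 = RandomReal[{0, 1}, {20, 2}];
    pts2 = RandomReal[{2, 3}, {20, 2}];
    ConvexOptimization[Norm[{w1, w2}],
     Join[Table[{w1, w2} . p + b >= 1, {p, pts1}],
      Table[{w1, w2} . p + b <= -1, {p, pts2}]], {w1, w2, b}]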

The separating line is:

Find a quadratic polynomial that separates two groups of 3D points:

Construct the quadratic polynomial data matrices for the two sets using DesignMatrix:

For separation, the polynomial must be at least 1 on set 1 and at most -1 on set 2:

Find the separating polynomial by minimizing the norm of its coefficients:

The polynomial separating the two groups of points is:

Plot the polynomial separating the two datasets:

Separate a given set of points into different groups. This is done by finding a center for each point, minimizing a penalized sum of norms in which a given local kernel weights the pairwise differences between centers and a given penalty parameter controls the grouping:

The kernel is a k-nearest neighbor function that is 1 for points among the nearest neighbors and 0 otherwise. For this problem, a fixed number of nearest neighbors is selected:

The objective is:

Find the group centers:

For each data point, there exists a corresponding center. Data belonging to the same group will have the same center value:

Extract and plot the grouped points:

Facility Location Problems  (1)

Find the positions of various cell towers and the range needed to serve clients at given locations:

Each cell tower consumes power proportional to its range. The objective is to minimize the total power consumption:

Introduce binary decision variables indicating whether a given client is covered by a given cell tower:

Each cell tower must be located such that its range covers some of the clients:

Each cell tower can cover multiple clients:

Each cell tower has a minimum and maximum coverage:

Collect all the variables:

Find the cell tower positions and their ranges:

Extract cell tower position and range:

Visualize the positions and ranges of the towers with respect to client locations:

Portfolio Optimization  (1)

Find the distribution of capital to invest in six stocks to maximize return while minimizing risk:

The return is given by r.w, where r is the vector of expected return values of the individual stocks and w is the vector of portfolio weights:

The risk is given by a quadratic form γ w.Σ.w, where γ is a risk-aversion parameter and Σ is the covariance matrix of the stocks:

The objective is to maximize return while minimizing risk for a specified risk-aversion parameter:

The effect on market prices due to the buying and selling of stocks is modeled by a market impact cost, which is expressed with a power cone using the epigraph transformation:

The weights must all be greater than 0 and the weights plus market impact costs must add to 1:

Compute the returns and corresponding risk for a range of risk-aversion parameters:

The optimal return over a range of risk-aversion parameters gives an upper-bound envelope on the tradeoff between return and risk:

Compute the weights for a specified number of risk-aversion parameters:

By accounting for the market costs, a diversified portfolio can be obtained for low risk aversion; when the risk aversion is high, the market impact cost dominates and leads to a less diversified portfolio:

Image Processing  (1)

Recover a corrupted image by finding an image that is closest under the total variation norm:

Create a corrupted image by randomly deleting 40% of the data points.

The objective is to minimize the total variation Sum[Sqrt[Abs[u[i+1, j] - u[i, j]]^2 + Abs[u[i, j+1] - u[i, j]]^2], {i, 1, n-1}, {j, 1, m-1}], where u is the image data:

Assume that any nonzero data points u[i, j] are uncorrupted. For these positions, set u[i, j] equal to the original image data:

Find the solution and show the restored image:

See Also

LinearOptimization ▪ QuadraticOptimization ▪ SecondOrderConeOptimization ▪ SemidefiniteOptimization ▪ ConicOptimization ▪ GeometricOptimization ▪ LinearFractionalOptimization

Tech Notes

  • Optimization Method Framework

Related Guides

  • Convex Optimization
  • Optimization
  • Symbolic Vectors, Matrices and Arrays

History

Introduced in 2020 (12.2)

Cite this Page

Text

Wolfram Research (2020), ConvexOptimization, Wolfram Language function, https://reference.wolfram.com/language/ref/ConvexOptimization.html.

CMS

Wolfram Language. 2020. "ConvexOptimization." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/ConvexOptimization.html.

APA

Wolfram Language. (2020). ConvexOptimization. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/ConvexOptimization.html

BibTeX

@misc{reference.wolfram_2025_convexoptimization, author="Wolfram Research", title="{ConvexOptimization}", year="2020", howpublished="\url{https://reference.wolfram.com/language/ref/ConvexOptimization.html}", note=[Accessed: 01-December-2025]}

BibLaTeX

@online{reference.wolfram_2025_convexoptimization, organization={Wolfram Research}, title={ConvexOptimization}, year={2020}, url={https://reference.wolfram.com/language/ref/ConvexOptimization.html}, note=[Accessed: 01-December-2025]}
