package optimize

Linear Supertypes
AnyRef, Any

Type Members

  1. class ApproximateGradientFunction[K, T] extends DiffFunction[T]

    Approximates a gradient by finite differences.

  2. trait ApproximateLineSearch extends MinimizingLineSearch

    A line search optimizes a function of one variable without analytic gradient information.

  3. class BacktrackingLineSearch extends ApproximateLineSearch

    Implements a backtracking line search like the one in LBFGS-C (which is (c) 2007-2010 Naoaki Okazaki, BSD-licensed).

  4. trait BatchDiffFunction[T] extends DiffFunction[T] with (T, IndexedSeq[Int]) ⇒ Double

    A diff function that supports subsets of the data.

  5. case class BatchSize(size: Int) extends OptimizationOption with Product with Serializable

  6. class CachedBatchDiffFunction[T] extends BatchDiffFunction[T]

  7. class CachedDiffFunction[T] extends DiffFunction[T]

  8. class CompactHessian extends NumericOps[CompactHessian]

  9. abstract class CubicLineSearch extends Logging with MinimizingLineSearch

  10. trait DiffFunction[T] extends StochasticDiffFunction[T]

    Represents a differentiable function whose output is guaranteed to be consistent across repeated evaluations at the same point.

  11. class EmpiricalHessian[T] extends AnyRef

    The empirical Hessian evaluates Hessian-vector products for multiplication, using finite differences of the gradient.

  12. sealed class FirstOrderException extends RuntimeException

  13. abstract class FirstOrderMinimizer[T, -DF <: StochasticDiffFunction[T]] extends Minimizer[T, DF] with Logging

  14. class FisherDiffFunction[T] extends SecondOrderFunction[T, FisherMatrix[T]]

  15. class FisherMatrix[T] extends AnyRef

    The Fisher matrix approximates the Hessian by E[grad grad'].

  16. case class L1Regularization(value: Double = 1.0) extends OptimizationOption with Product with Serializable

  17. case class L2Regularization(value: Double = 1.0) extends OptimizationOption with Product with Serializable

  18. class LBFGS[T] extends FirstOrderMinimizer[T, DiffFunction[T]] with Logging

    Port of LBFGS to Scala.

  19. trait LineSearch extends ApproximateLineSearch

    A line search optimizes a function of one variable without analytic gradient information.

  20. class LineSearchFailed extends FirstOrderException

  21. case class MaxIterations(num: Int) extends OptimizationOption with Product with Serializable

  22. trait Minimizer[T, -F] extends AnyRef

    Anything that can minimize a function

  23. trait MinimizingLineSearch extends AnyRef

  24. class NaNHistory extends FirstOrderException

  25. class OWLQN[T] extends LBFGS[T] with Logging

    Implements the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method, a variant of LBFGS that handles L1 regularization.

  26. sealed trait OptimizationOption extends (OptParams) ⇒ OptParams

  27. trait OptimizationPackage[Function, Vector] extends AnyRef

  28. trait OptimizationPackageLowPriority extends AnyRef

  29. class ProjectedQuasiNewton extends FirstOrderMinimizer[DenseVector[Double], DiffFunction[DenseVector[Double]]] with Projecting[DenseVector[Double]] with Logging

  30. trait Projecting[T] extends AnyRef

  31. trait SecondOrderFunction[T, H] extends DiffFunction[T]

    Represents a function for which we can easily compute the Hessian.

  32. class SpectralProjectedGradient[T, -DF <: DiffFunction[T]] extends FirstOrderMinimizer[T, DF] with Projecting[T] with Logging

    SPG is a Spectral Projected Gradient minimizer: it minimizes a differentiable function subject to the constraint that the optimum lie in a set, specified by the projection operator projection.

  33. class StepSizeOverflow extends FirstOrderException

  34. case class StepSizeScale(alpha: Double = 1.0) extends OptimizationOption with Product with Serializable

  35. class StepSizeUnderflow extends FirstOrderException

  36. class StochasticAveragedGradient[T] extends FirstOrderMinimizer[T, BatchDiffFunction[T]]

  37. trait StochasticDiffFunction[T] extends (T) ⇒ Double

    A differentiable function whose output is not guaranteed to be the same across consecutive invocations.

  38. abstract class StochasticGradientDescent[T] extends FirstOrderMinimizer[T, StochasticDiffFunction[T]] with Logging

    Minimizes a function using stochastic gradient descent

  39. class StrongWolfeLineSearch extends CubicLineSearch

  40. case class Tolerance(fvalTolerance: Double = 1.0E-5, gvalTolerance: Double = 1.0E-6) extends OptimizationOption with Product with Serializable

  41. class TruncatedNewtonMinimizer[T, H] extends Minimizer[T, SecondOrderFunction[T, H]] with Logging

    Implements a truncated Newton trust-region method (similar to TRON).
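A minimal sketch of how these types fit together, assuming the standard Breeze `breeze.optimize` API that this listing documents: define a DiffFunction (here an illustrative convex quadratic), then minimize it with LBFGS. The objective and constructor arguments are examples, not prescribed values.

```scala
import breeze.linalg.{DenseVector, norm}
import breeze.optimize.{DiffFunction, LBFGS}

// f(x) = ||x - 3||^2, with gradient 2 * (x - 3): a strictly convex
// function whose unique minimum is at x = (3, 3, 3).
val f = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]): (Double, DenseVector[Double]) = {
    val d = x - 3.0
    (d dot d, d * 2.0)
  }
}

// m is the history size of the limited-memory Hessian approximation.
val lbfgs = new LBFGS[DenseVector[Double]](maxIter = 100, m = 4)
val xOpt = lbfgs.minimize(f, DenseVector.zeros[Double](3))
// xOpt is approximately DenseVector(3.0, 3.0, 3.0)
```

CachedDiffFunction can wrap `f` to avoid recomputing the value and gradient when the minimizer evaluates the same point twice.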

Value Members

  1. object AdaptiveGradientDescent

    Implements the squared-L2 and L1 updates from Duchi et al. (2010), "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization".

  2. object BatchDiffFunction

  3. object DiffFunction

  4. object EmpiricalHessian

  5. object FirstOrderMinimizer

  6. object FisherMatrix

  7. object GradientTester extends Logging

    Compares a computed gradient against an empirical gradient obtained by finite differences.

  8. object LBFGS

  9. object LineSearch

  10. object OWLQN

  11. object OptimizationOption

  12. object OptimizationPackage

  13. object PreferBatch extends OptimizationOption with Product with Serializable

  14. object PreferOnline extends OptimizationOption with Product with Serializable

  15. object ProjectedQuasiNewton

  16. object SecondOrderFunction

  17. object StochasticGradientDescent

  18. package flow

  19. package linear

  20. def minimize[Objective, Vector](fn: Objective, init: Vector, options: OptimizationOption*)(implicit optimization: OptimizationPackage[Objective, Vector]): Vector
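The package-level minimize method above can be sketched as follows, assuming Breeze's `breeze.optimize` package: OptimizationOption values such as MaxIterations and L2Regularization are folded into the optimizer's parameters, and the implicit OptimizationPackage selects a suitable minimizer for the objective type. The quadratic objective here is illustrative.

```scala
import breeze.linalg.{DenseVector, norm}
import breeze.optimize.{minimize, DiffFunction, MaxIterations, L2Regularization}

// f(x) = ||x||^2, gradient 2x; minimum at the origin, so the added
// L2 penalty does not move the optimum in this example.
val g = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]): (Double, DenseVector[Double]) =
    (x dot x, x * 2.0)
}

val sol = minimize(g, DenseVector.fill(3)(1.0), MaxIterations(50), L2Regularization(0.1))
// sol is approximately DenseVector(0.0, 0.0, 0.0)
```

Because the options are just functions OptParams => OptParams, they compose: passing none runs with the defaults, and later options override earlier ones.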

Inherited from AnyRef

Inherited from Any