Optimization and Control


This paper develops a computationally efficient algorithm for the Multiple Vehicle Pickup and Delivery Problem (MVPDP) with the objective of minimizing the tour cost incurred while picking up and delivering customers. To this end, this paper constructs a novel 0-1 Integer Quadratic Programming (IQP) formulation to exactly solve the MVPDP. Compared to the state-of-the-art Mixed Integer Linear Programming (MILP) formulation of the problem, the one presented here requires fewer constraints and decision variables. To ensure that this IQP formulation of the MVPDP can be solved in a computationally efficient manner, this paper devises a set of sufficient conditions under which the formulation is convex when the integer variables are relaxed. In addition, this paper describes a transformation that maps any non-convex IQP formulation of the MVPDP into an equivalent convex one. The superior computational efficiency of this convex IQP method compared to the state-of-the-art MILP formulation is demonstrated through extensive simulated and real-world experiments.
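The convexifying transformation is the key computational step, and while the abstract does not spell it out, the classical diagonal-shift idea for binary quadratic programs gives the intuition: since x_i^2 = x_i on {0,1}, shifting the diagonal of the Hessian and compensating the linear term leaves every binary objective value unchanged while making the continuous relaxation convex. The sketch below illustrates only that generic idea; the function name, data, and minimal-shift choice are assumptions, not the paper's exact construction.

```python
import numpy as np

def convexify_binary_iqp(Q, c, shift=None):
    """Convexify the objective x'Qx + c'x of a 0-1 IQP.

    For binary x, x_i**2 == x_i, so adding `shift` to every diagonal entry
    of Q and subtracting it from c leaves the objective unchanged on
    {0,1}^n, while a large enough shift makes the quadratic term convex.
    """
    Qs = (Q + Q.T) / 2                                     # symmetrize
    if shift is None:
        shift = max(0.0, -np.linalg.eigvalsh(Qs).min())    # smallest PSD-restoring shift
    Q_cvx = Qs + shift * np.eye(Q.shape[0])
    c_cvx = c - shift * np.ones(Q.shape[0])
    return Q_cvx, c_cvx, shift

# Sanity check: objective values agree on an arbitrary binary point.
rng = np.random.default_rng(0)
n = 6
Q = rng.normal(size=(n, n))
c = rng.normal(size=n)
Q_cvx, c_cvx, lam = convexify_binary_iqp(Q, c)
x = rng.integers(0, 2, size=n).astype(float)
assert np.isclose(x @ Q @ x + c @ x, x @ Q_cvx @ x + c_cvx @ x)
```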

Series FACTS devices are one of the key enablers for very high penetration of renewables due to their capability to continuously control power flows on transmission lines. This paper proposes a bilevel optimization model to optimally locate variable series reactors (VSR) and phase shifting transformers (PST) in the transmission network considering high penetration of wind power. The upper-level problem seeks to minimize the investment cost of series FACTS devices, the cost of wind power curtailment, and possible load shedding. The lower-level problems capture the market clearing under different operating scenarios. Due to the poor scalability of the $B\theta$ formulation, the shift-factor structure of the FACTS allocation problem is derived. A customized reformulation and decomposition algorithm is designed and implemented to solve the proposed bilevel model with binary variables in both the upper and lower levels. Detailed numerical results based on the 118-bus system demonstrate the efficient performance of the proposed planning model and the important role of series FACTS in facilitating the integration of wind power.
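As background on the shift-factor structure the allocation model builds on, the sketch below computes a standard PTDF (shift-factor) matrix for a small DC network and maps bus injections to line flows. It is purely illustrative: the paper's model additionally treats the VSR reactance and PST angle as decision variables, which this fixed-reactance example does not capture, and the network data and function name are assumptions.

```python
import numpy as np

def ptdf_matrix(n_bus, lines, slack=0):
    """Shift-factor (PTDF) matrix of a DC network.

    lines: list of (from_bus, to_bus, reactance). Returns an (n_lines, n_bus)
    matrix mapping bus injections to line flows under the DC approximation.
    """
    n_l = len(lines)
    A = np.zeros((n_l, n_bus))          # line-bus incidence matrix
    Bd = np.zeros(n_l)                  # line susceptances
    for k, (f, t, x) in enumerate(lines):
        A[k, f], A[k, t] = 1.0, -1.0
        Bd[k] = 1.0 / x
    Bbus = A.T @ np.diag(Bd) @ A
    keep = [b for b in range(n_bus) if b != slack]
    Binv = np.zeros((n_bus, n_bus))     # pseudo-inverse with slack row/column zeroed
    Binv[np.ix_(keep, keep)] = np.linalg.inv(Bbus[np.ix_(keep, keep)])
    return np.diag(Bd) @ A @ Binv

# 3-bus example: inject 1 p.u. at bus 1, withdraw it at bus 2, read off line flows.
lines = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.2)]
H = ptdf_matrix(3, lines)
inj = np.array([0.0, 1.0, -1.0])
print(H @ inj)                          # DC line flows induced by the injection
```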

This paper deals with the problem of clearing sequential electricity markets under uncertainty. We consider the European approach, where reserves are traded separately from energy to meet exogenous reserve requirements. Recently proposed stochastic dispatch models that co-optimize these services provide the most efficient solution in terms of expected operating costs by computing reserve needs endogenously. However, these models are incompatible with existing market designs. This paper proposes a new method to compute reserve requirements that brings the outcome of sequential markets closer to the stochastic energy and reserve co-optimization in terms of cost efficiency. Our method is based on a stochastic bilevel program that implicitly improves the inter-temporal coordination of energy and reserve markets, but remains compatible with the European market design. We use the IEEE 24-bus test system to illustrate the benefit of intelligently setting operating reserves in single and multiple reserve control zones.
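To make the idea of intelligently setting operating reserves concrete, the toy single-bus example below picks, by grid search, the reserve requirement that minimizes procurement cost plus expected real-time balancing cost. This is only an analogy for intuition: the paper's method is a stochastic bilevel program over full sequential market-clearing problems, and every number and name below is an illustrative assumption.

```python
import numpy as np

# Toy single-bus system: choose the reserve requirement R that minimizes
# day-ahead reserve procurement cost plus expected real-time balancing cost.
wind_forecast = 30.0
wind_scenarios = np.array([10.0, 25.0, 30.0, 40.0])   # possible realizations
probs = np.array([0.2, 0.3, 0.3, 0.2])
c_reserve, c_redispatch, c_shed = 5.0, 40.0, 500.0    # illustrative prices

def expected_cost(R):
    cost = c_reserve * R                       # day-ahead reserve procurement
    for w, p in zip(wind_scenarios, probs):
        shortfall = max(wind_forecast - w, 0)  # wind deficit vs. forecast
        covered = min(shortfall, R)            # balanced by procured reserve
        shed = shortfall - covered             # remainder is load shedding
        cost += p * (c_redispatch * covered + c_shed * shed)
    return cost

candidates = np.linspace(0, 30, 61)
best_R = candidates[np.argmin([expected_cost(R) for R in candidates])]
print(best_R, expected_cost(best_R))           # reserve level with lowest expected cost
```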

This paper studies closed-loop chance-constrained control problems with disturbance feedback (equivalently, state feedback), where the state and input vectors must remain in a prescribed polytopic safe region with a predefined confidence level. We propose to use a scenario approach in which the uncertainty is replaced with a set of random samples (scenarios). Though a standard form of the scenario approach is applicable in principle, it typically requires a large number of samples to ensure the required confidence levels. To resolve this drawback, we propose a method that reduces the computational complexity by eliminating redundant samples and, more importantly, by truncating the less informative ones. Unlike prior methods that start from the full sample set and remove the less informative samples at each step, we sort the samples in descending order by first finding the most dominant ones. In this process the importance of each sample is measured via a proper mapping. The most dominant samples can then be selected based on the allowable computational complexity, and the remaining samples are truncated offline. The truncation error is later compensated for by adjusting the safe regions via properly designed buffers, whose sizes are functions of the feedback gain and the truncation error.
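Below is a minimal sketch of the dominance-based truncation idea under simplifying assumptions that are not the paper's exact construction: in particular, the disturbance-feedback gain K is held fixed here, whereas in the paper it is a decision variable and the buffers depend on it. Samples are ranked by how tight they make the polytopic constraints, only the most dominant ones are kept, and the truncated tightness is absorbed into per-row buffers on the safe region; all data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])  # polytopic safe region A z <= b
b = np.ones(4)
K = np.array([[0.3, 0.1], [0.0, 0.2]])      # disturbance-feedback gain (fixed in this sketch)
W = rng.normal(scale=0.5, size=(500, 2))    # 500 disturbance samples

# Per-row constraint tightness contributed by each sample: (A K w_i)_j.
tight = W @ K.T @ A.T                       # shape (n_samples, n_rows)
importance = tight.max(axis=1)              # importance of each sample across rows
order = np.argsort(-importance)             # most dominant samples first
keep = order[:50]                           # retain 50 samples, truncate the rest offline

# Buffer: per-row tightness the truncated samples would have added.
buffer = tight.max(axis=0) - tight[keep].max(axis=0)
b_adjusted = b - buffer                     # tightened safe region compensates truncation

# For this fixed K, enforcing  A @ x + tight[keep].max(axis=0) <= b_adjusted
# is equivalent to enforcing the constraints for the full sample set.
```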