Abstract:
Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time, using these load side resources. Specifically, it studies two problems.
(1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, restoring inter-area power flows, etc., in a way that minimizes total disutility for the loads to participate in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing convergence of these algorithms, we design distributed load-side controllers and prove stability of closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency controls, linear and nonlinear power flow models, and the interactions between generator dynamics and load control.
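The coordination mechanism behind optimal load control can be illustrated with a toy dual-decomposition loop, in which a shared dual variable plays the role that the frequency deviation plays in the thesis. This is a minimal sketch with hypothetical quadratic disutilities and invented numbers, not the thesis's power system model:

```python
import numpy as np

# Toy optimal-load-control-style problem: loads share an imbalance P while
# minimizing total disutility
#   minimize sum_i d_i^2 / (2 b_i)   subject to  sum_i d_i = P.
# The dual variable lam is the common coordination signal; all values are
# illustrative.
b = np.array([1.0, 2.0, 0.5, 1.5])   # hypothetical load flexibilities
P = 4.0                               # power imbalance to absorb
lam, step = 0.0, 0.2

for _ in range(200):
    d = b * lam                       # local best response: argmin_d d^2/(2b) - lam*d
    lam += step * (P - d.sum())       # dual ascent on the balance constraint

# At the optimum lam* = P / sum(b), so d_i* = b_i * P / sum(b).
```

Each load needs only the shared signal `lam` and its own parameter `b_i`, which is what makes the scheme distributed.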
(2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.
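The chance constraint in the slow-timescale problem can be sketched with a scenario approximation: pick the cheapest control action whose empirical probability of future voltage violation stays below a tolerance. The linear voltage surrogate, sensitivities, and all numbers below are invented for illustration, not the thesis's optimal power flow model:

```python
import numpy as np

# Toy chance-constrained control: choose reactive injection q so that
# Pr[v < vmin] <= eps under random demand fluctuations w, where the voltage
# is approximated by the linear surrogate v = v0 + a*q - b*w (hypothetical).
rng = np.random.default_rng(2)
v0, a, b = 0.97, 0.02, 0.03       # illustrative sensitivities (p.u.)
vmin, eps = 0.95, 0.05
w = rng.standard_normal(10_000)    # sampled demand-fluctuation scenarios

def violation_prob(q):
    v = v0 + a * q - b * w
    return np.mean(v < vmin)       # empirical violation probability

# Smallest (cheapest) q meeting the chance constraint over the scenarios.
q = min(qc for qc in np.linspace(0, 5, 101) if violation_prob(qc) <= eps)
```

With no control (`q = 0`) the violation probability is far above `eps`; the scenario search finds the smallest injection that brings it under the tolerance.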
Abstract:
The Mount Meager Volcanic Complex (MMVC) in south-western British Columbia is a potentially active, hydrothermally altered massif comprising a series of steep, glaciated peaks. Climatic conditions and glacial retreat have led to further weathering, exposure and de-buttressing of steep slopes composed of weak, unconsolidated material. This has resulted in an increased frequency of landslide events over the past few decades, many of which have dammed the rivers bordering the Complex. The breach of these debris dams presents a risk of flooding to the downstream communities. Preliminary mapping showed there are numerous sites around the Complex where future failure could occur. Some of these areas are currently undergoing progressive slope movement and display supporting features such as anti-scarps and tension cracks. The effect of water infiltration on stability was modelled using the Rocscience program Slide 6.0. The main site of focus was Mount Meager in the south-east of the Complex, where the most recent landslide took place. Two profiles through Mount Meager were analysed, along with one other location in the northern section of the MMVC where instability had been detected. The lowest Factor of Safety (FOS) for each profile was displayed and an estimate of the volume which could be generated was deduced. A hazard map showing the inundation zones for various volumes of debris flows was created from simulations using LAHARZ. Results showed the massif is unstable, even before infiltration. Varying the amount of infiltration appears to have no significant impact on the annual FOS, implying that small changes of any kind could also trigger failure. Further modelling could be done to assess the impact of infiltration over shorter time scales.
The Slide models show the volume of material that could be delivered to the Lillooet River Valley to be of the order of 10⁹ m³, which, based on the LAHARZ simulations, would completely inundate the valley and communities downstream. A major hazard is that the removal of such a large amount of material has the potential to trigger an explosive eruption of the geothermal system and renew volcanic activity. Although events of this size are infrequent, there is a significant risk to the communities downstream of the Complex.
Design Optimization of Modern Machine-Drive Systems for Maximum Fault Tolerance and Optimal Operation
Abstract:
Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, are an indispensable part of high-power-density products such as hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of their machine-drive systems. Matching the electric machine and its drive system for optimal cost and operation has been a major challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme that achieves the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need for new techniques to connect the design and control of electric machines and drive systems. A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. This modeling process was also utilized in the design process, in the form of a finite-element-based optimization, and in a hardware-in-the-loop finite-element-based optimization. It was later employed in the design of very accurate and highly efficient physics-based customized observers required for fault diagnosis as well as for sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques. The modeling process was also employed in the real-time demagnetization control of the machine, and various real-time scenarios were successfully verified. It was shown that this process makes it possible to optimally revisit the assumptions used in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions.
The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, the physics-based fault diagnosis, and the physics-based sensorless technique are described in detail. To investigate the performance of the developed design test-bed, software and hardware setups were constructed first, and several topologies of the permanent magnet machine were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created; verification of the proposed technique over a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify that the proposed technique operates under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, making the system an ideal candidate for propulsion systems.
Abstract:
During the past decade, there has been a dramatic increase by postsecondary institutions in providing academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies pertaining to reapportionment of course-delivery seat time have been a major facet of these institutional initiatives; most notably, within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market-share, optimize the usage of finite facility capacity, and contain costs, especially during these economically turbulent times. So, while enrollments have surged to the point where nearly one in three 18-to-24 year-old U.S. undergraduates are community college students (Pew Research Center, 2009), graduation rates, on average, still remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment efforts is the cognitive phenomenon commonly referred to as the spacing effect, the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to: course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multi-level, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college. The subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than did their peers in short seat-time classes (p < .05). 
Additionally, a model composed of nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected, nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate, to educational leaders, researchers, and institutional-research/business-intelligence professionals, the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.
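The discrete-time survival machinery used in the study can be sketched with a person-period expansion and the maximum-likelihood estimate of the period-specific withdrawal hazard. The cohort records below are invented, and the intercept-only hazard estimate stands in for the study's full covariate model:

```python
import numpy as np

# Each record is (last observed term, event flag): event=1 means the student
# withdrew in that term, event=0 means censored (still enrolled when last seen).
records = [(1, 1), (2, 1), (2, 0), (3, 1), (3, 0), (3, 0), (3, 0), (2, 1)]

max_t = max(t for t, _ in records)
at_risk = np.zeros(max_t)
events = np.zeros(max_t)

# Person-period expansion: a student contributes one row per term survived,
# with outcome 1 only in the term of withdrawal.
for t, event in records:
    for period in range(1, t + 1):
        at_risk[period - 1] += 1
        if event and period == t:
            events[period - 1] += 1

hazard = events / at_risk           # MLE of the discrete-time hazard h(t)
survival = np.cumprod(1 - hazard)   # S(t) = prod_{u<=t} (1 - h(u))
```

Fitting a logistic regression on the expanded person-period rows, with period dummies and covariates, recovers exactly this hazard structure while allowing predictors such as seat-time group.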
Abstract:
This thesis presents a system for visually analyzing the electromagnetic fields of the electrical machines in the energy conversion laboratory. The system uses the finite element method to achieve a real-time effect in the analysis of electrical machines during hands-on experimentation. It is a tool to support students' understanding of the electromagnetic field by calculating performance measures and operational concepts pertaining to the practical study of electrical machines. Energy conversion courses are fundamental in electrical engineering. The laboratory is oriented toward facilitating the practical application of the theory presented in class, enabling students to use numerically obtained electromagnetic field solutions to calculate performance measures and operating characteristics. Laboratory experiments help students understand electromagnetic concepts through this visual and interactive analysis system, and this understanding is accomplished while hands-on experimentation takes place in real time.
Abstract:
Colloidal stability and efficient interfacial charge transfer in semiconductor nanocrystals are of great importance for photocatalytic applications in aqueous solution, since they provide long-term functionality and high photocatalytic activity, respectively. However, colloidal stability and interfacial charge transfer efficiency are difficult to optimize simultaneously, since the ligand layer often acts both as a shell stabilizing the nanocrystals in colloidal suspension and as a barrier reducing the efficiency of interfacial charge transfer. Here we show that, for cysteine-coated, Pt-decorated CdS nanocrystals with Na2SO3 as hole scavenger, triethanolamine (TEOA) replaces the original cysteine ligands in situ and prolongs the highly efficient and steady H2 evolution period by more than a factor of 10. It is shown that Na2SO3 is consumed during H2 generation while TEOA makes no significant contribution to the H2 generation. An apparent quantum yield of 31.5%, a turnover frequency of 0.11 H2/Pt/s, and interfacial charge transfer faster than 0.3 ps were achieved in the TEOA-stabilized system. The short length, branched structure, and weak binding of TEOA to CdS, as well as sufficient free TEOA in the solution, are the keys to enhancing colloidal stability while maintaining efficient interfacial charge transfer. Additionally, TEOA is commercially available and cheap, and we anticipate that this approach can be widely applied in many photocatalytic applications involving colloidal nanocrystals.
Abstract:
Knowledge of intra- and inter-individual variation can stimulate attempts at description, interpretation and prediction of motor co-ordination (MC). Aim: To analyse change, stability and prediction of MC in children. Subjects and methods: A total of 158 children, 83 boys and 75 girls, aged 6, 7 and 8 years, were evaluated in 2006 and re-evaluated in 2012 at 12, 13 and 14 years of age. MC was assessed through the Kiphard-Schilling body co-ordination test, and growth, skeletal maturity, physical fitness, fundamental motor skills (FMS), physical activity and socioeconomic status (SES) were measured and/or estimated. Results: Repeated-measures MANOVA indicated a significant effect of group, sex and time on a linear combination of the MC tests. Univariate tests revealed that group 3 (8–14 years) scored significantly better than group 1 (6–12 years) in all MC tests, and boys performed better than girls in hopping for height and moving sideways. Scores in MC were also higher at follow-up than at baseline. Inter-age correlations for MC ranged from 0.15 to 0.74. Childhood predictors of MC were growth, physical fitness, FMS, physical activity and SES. Biological maturation did not contribute to the prediction of MC. Conclusion: MC seemed moderately stable from childhood through adolescence and, additionally, inter-individual predictors at adolescence were growth, FMS, physical fitness, physical activity and SES.
Abstract:
With progressive climate change, the preservation of biodiversity is becoming increasingly important. Only if the gene pool is large enough and the requirements of species are diverse will there be species that can adapt to the changing circumstances. To maintain biodiversity, we must understand the consequences of the various strategies, and mathematical models of population dynamics could provide prognoses. However, a model that reproduces and explains the mechanisms behind the diversity of species observed experimentally and in nature is still needed. A combination of theoretical models with detailed experiments is needed to test biological processes in models and to compare predictions with real outcomes. In this thesis, several food webs are modeled and analyzed. Among others, models are formulated of laboratory experiments performed at the Zoological Institute of the University of Cologne. Numerical data from the simulations are in good agreement with the experimental results. Numerical simulations demonstrate that few assumptions are necessary to reproduce, in a model, the sustained oscillations of population size that the experiments show. However, analysis indicates that species "thrown together by chance" are not very likely to survive together over long periods. Even larger food webs do not show significantly different outcomes, underscoring how extraordinary and complicated natural diversity is. In order to produce such a coexistence of randomly selected species, as observed in the experiment, models require additional information about biological processes or restrictions on the assumptions. Another explanation for the observed coexistence is a slow extinction that takes longer than the observation time; simulated species survive a comparable period of time before they eventually die out.
Interestingly, the same models allow the survival of several species in equilibrium and thus do not follow the so-called competitive exclusion principle. This equilibrium state is, however, more fragile to changes in nutrient supply than the oscillating coexistence. Overall, the studies show that a diverse system is likely to have oscillating population numbers, and that oscillating population numbers in turn stabilize a food web both against demographic noise and against changes of the habitat. Model predictions certainly cannot be converted at face value into policies for real ecosystems, but the stabilizing character of fluctuations should be considered in the regulation of animal populations.
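The sustained population oscillations discussed above can be illustrated with a generic predator-prey model that settles onto a limit cycle. This is a textbook Rosenzweig-MacArthur-type sketch with invented parameters, not one of the thesis's food-web models:

```python
import numpy as np

# Rosenzweig-MacArthur predator-prey model, integrated with explicit Euler.
# With these (illustrative) parameters the coexistence equilibrium is
# unstable and the populations oscillate on a limit cycle.
r, K = 1.0, 10.0        # prey growth rate and carrying capacity
a, h = 1.0, 0.5         # attack rate and handling time
e, m = 0.5, 0.3         # conversion efficiency and predator mortality
dt, steps = 0.01, 60_000
N, P = 5.0, 2.0         # initial prey and predator densities
traj = np.empty(steps)

for i in range(steps):
    f = a * N / (1 + a * h * N)          # Holling type-II functional response
    dN = r * N * (1 - N / K) - f * P     # prey: logistic growth minus predation
    dP = e * f * P - m * P               # predator: conversion minus mortality
    N, P = N + dt * dN, P + dt * dP
    traj[i] = N

late = traj[30_000:]    # discard the transient; oscillations persist here
```

The oscillations do not decay: after the transient, the prey density keeps cycling between low and high values, which is the qualitative behaviour the laboratory experiments showed.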
Abstract:
Shearing is the process in which sheet metal is mechanically cut between two tools. Various shearing technologies are commonly used in the sheet metal industry, for example in cut-to-length lines, slitting lines, and end cropping. Shearing has speed and cost advantages over competing cutting methods like laser and plasma cutting, but involves large forces on the equipment and large strains in the sheet material. The constant development of sheet metals toward higher strength and formability leads to increased forces on the shearing equipment and tools, and shearing of new sheet materials requires new suitable shearing parameters. Investigating shearing parameters through live tests in production is expensive, while separate experiments are time-consuming and require specialized equipment. Studies involving a large number of parameters and coupled effects are therefore preferably performed by finite element based simulations, but accurate experimental data remain a prerequisite to validate such simulations, and there is a shortage of such data. In industrial shearing processes, measured forces are always larger than the actual forces acting on the sheet, due to friction losses. Shearing also generates a force that attempts to separate the two tools, which changes the shearing conditions by increasing the clearance between the tools. Tool clearance is also the most common shearing parameter to adjust, depending on material grade and sheet thickness, to moderate the required force and to control the final sheared edge geometry. In this work, an experimental procedure that provides a stable tool clearance together with accurate measurements of tool forces and tool displacements was designed, built and evaluated. Important shearing parameters and demands on the experimental set-up were identified in a sensitivity analysis performed with finite element simulations under the assumption of plane strain.
To achieve a stable tool clearance and accurate force measurements, a symmetric experiment was constructed with two simultaneous shears and internal balancing of the forces attempting to separate the tools. Steel sheets of different strength levels were sheared using this set-up, with various tool clearances, sheet clamping conditions and rake angles. Results showed that tool penetration before fracture decreased with increased material strength. When one side of the sheet was left unclamped and free to move, the required shearing force decreased, but the force attempting to separate the two tools increased. Further, the maximum shearing force decreased and the rollover increased with increased tool clearance. Digital image correlation was applied to measure strains on the sheet surface. The obtained strain fields, together with a material model, were used to compute the stress state in the sheet. A comparison, up to crack initiation, of these experimental results with corresponding results from finite element simulations in three dimensions and in a plane strain approximation showed that effective strains on the surface are representative also of the bulk material. A simple model was successfully applied to calculate the tool forces in shearing with angled tools from forces measured with parallel tools. These results suggest that, with respect to tool forces, a plane strain approximation is valid also for angled tools, at least for small rake angles. In general terms, this study provides a stable symmetric experimental set-up, with internal balancing of lateral forces, for accurate measurements of tool forces, tool displacements, and sheet deformations, enabling study of the effects of important shearing parameters. The results give further insight into the strain and stress conditions at crack initiation during shearing, and can also be used to validate models of the shearing process.
Abstract:
Unicellular bottom-heavy swimming microorganisms are usually denser than the fluid in which they swim. In shallow suspensions, the bottom heaviness results in a gravitational torque that orients the cells to swim vertically upwards in the absence of fluid flow. Swimming cells thus accumulate at the upper surface to form a concentrated layer. When the cell concentration is high enough, the layer overturns to form bioconvection patterns: thin concentrated plumes of cells descend rapidly, and cells return to the upper surface in wide, slowly moving upwelling plumes. When there is fluid flow, a second, viscous torque is exerted on the swimming cells. The balance between the viscous torque from the local shear flow and the gravitational torque determines the cells' swimming direction (gyrotaxis). In this thesis, the wavelengths of bioconvection patterns are studied experimentally as well as theoretically, as follows. First, in aquatic systems it is rare to find one species living in isolation, and when several species swim together they can form complex patterns. Thus, a protocol for controlled experiments mixing two species of swimming algal cells, \emph{C. reinhardtii} and \emph{C. augustae}, is systematically described and images of bioconvection patterns are captured. A method for analysing images using wavelets and extracting the local dominant wavelength in spatially varying patterns is developed. The variation of the patterns as a function of the total concentration and the relative concentration of the two species is analysed. Second, the linear stability theory of bioconvection for a suspension of two mixed species is studied. The dispersion relation is computed using Fourier modes in order to calculate the neutral curves as a function of the wavenumbers $k$ and $m$. The neutral curves are plotted to compare the instability onset of the suspension of the two mixed species with the instability onset of each species individually.
This study could help us understand which species contributes most to the process of pattern formation. Finally, prediction of the most unstable wavelength was previously studied around a steady-state equilibrium. Since the assumption of steady-state equilibrium contradicts reality, the pattern formation in a layer of finite depth with an evolving basic state is studied here using the nonnormal modes approach. The nonnormal modes procedure identifies the optimal initial perturbation for a given time $t$, a given set of parameters, and a wavenumber $k$. We then measure the size of the optimal perturbation as it grows with time, over a range of wavenumbers for the same set of parameters, in order to extract the most unstable wavelength.
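The wavelength-extraction step described above can be illustrated on a one-dimensional intensity profile. The thesis uses a wavelet analysis of images to capture spatial variation; a plain Fourier periodogram on a synthetic profile stands in here, and the pattern, its wavelength, and the domain size are all invented:

```python
import numpy as np

# Extract the dominant wavelength of a synthetic 1D "bioconvection pattern"
# from the peak of its Fourier power spectrum.
n, L = 1024, 10.0                          # samples, domain length (cm)
x = np.linspace(0.0, L, n, endpoint=False)
true_wavelength = 0.625                    # chosen to sit exactly on a Fourier bin
rng = np.random.default_rng(3)
pattern = np.sin(2 * np.pi * x / true_wavelength) \
          + 0.2 * rng.standard_normal(n)   # sinusoidal pattern plus noise

spec = np.abs(np.fft.rfft(pattern - pattern.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=L / n)        # spatial frequencies (cycles per cm)
dominant = 1.0 / freqs[spec[1:].argmax() + 1]  # skip the zero-frequency bin
```

A wavelet transform replaces the single global spectrum with a position-dependent one, which is what allows a *local* dominant wavelength in spatially varying patterns.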
Abstract:
Site-specific management (SSM) is a form of precision agriculture whereby decisions on resource application and agronomic practices are improved to better match soil and crop requirements as they vary in the field. SSM enables the identification of regions (homogeneous management zones) within the area delimited by field boundaries; these subfield regions constitute areas with similar permanent characteristics. Traditional soil and pasture sampling and the necessary laboratory analyses are time-consuming, labour-intensive and cost-prohibitive, and are not viable from an SSM perspective, which requires a large number of soil and pasture samples in order to achieve a good representation of soil properties, nutrient levels, and pasture quality and productivity. The main objective of this work was to evaluate technologies with potential for monitoring the spatial and temporal variability of soil nutrients and of pasture green and dry matter yield (respectively GM and DM, in kg/ha), and for supporting the farmer's decision making. Three types of sensors were evaluated in a 7 ha experimental pasture field: an electromagnetic induction sensor ("DUALEM 1S", which measures the soil apparent electrical conductivity, ECa), an active optical sensor ("OptRx®", which measures the NDVI, Normalized Difference Vegetation Index) and a capacitance probe ("GrassMaster II", which estimates plant mass). The results indicate that a soil electrical conductivity probe is probably the best tool for monitoring not only some characteristics of the soil but also those of the pasture, which could be an important help in simplifying the sampling process and in supporting SSM decision making in precision agriculture projects.
On the other hand, the significant and very strong correlations obtained between capacitance and NDVI, and between either of these parameters and pasture productivity, show the potential of these tools for monitoring the evolution of spatial and temporal patterns of vegetative growth in biodiverse pastures, and for identifying different plant species and variability in pasture yield in Alentejo dry-land farming systems. These results are relevant for the selection of an adequate sensing system for a particular application, and open new perspectives for further work to test, calibrate and validate the sensors over a wider range of pasture production conditions, namely the extraordinary diversity of botanical species characteristic of the Mediterranean region at different periods of the year.
Abstract:
A new semi-implicit stress integration algorithm for finite strain plasticity (compatible with hyperelasticity) is introduced. Its most distinctive feature is the use of different parameterizations of the equilibrium and reference configurations. Rotation terms (nonlinear trigonometric functions) are integrated explicitly and correspond to a change in the reference configuration. In contrast, relative Green–Lagrange strains (which are quadratic in terms of displacements) represent the equilibrium configuration implicitly. In addition, the adequacy of several objective stress rates in the semi-implicit context is studied. We parametrize both reference and equilibrium configurations, in contrast with the so-called objective stress integration algorithms, which use coinciding configurations. A single constitutive framework provides the quantities needed by common discretization schemes. This is computationally convenient and robust, as all elements only need to provide pre-established quantities irrespective of the constitutive model. In this work, mixed strain/stress control is used, as well as our smoothing algorithm for the complementarity condition. Exceptional time-step robustness is achieved in elasto-plastic problems: often fewer than one-tenth of the typical number of time increments can be used, with a quantifiable effect on accuracy. The proposed algorithm is general: all hyperelastic models and all classical elasto-plastic models can be employed. Plane-stress, shell and 3D examples are used to illustrate the new algorithm. Both isotropic and anisotropic behavior are presented in elasto-plastic and hyperelastic examples.
Abstract:
We develop an algorithm and computational implementation for simulation of problems that combine Cahn–Hilliard type diffusion with finite strain elasticity. We have in mind applications such as the electro-chemo-mechanics of lithium ion (Li-ion) batteries. We concentrate on basic computational aspects. A staggered algorithm is proposed for the coupled multi-field model. For the diffusion problem, the fourth order differential equation is replaced by a system of second order equations to deal with the issue of the regularity required for the approximation spaces. Low order finite elements are used for discretization in space of the involved fields (displacement, concentration, nonlocal concentration). Three extensively worked numerical examples (both 2D and 3D) show the capabilities of our approach for the representation of (i) phase separation, (ii) the effect of concentration in deformation and stress, (iii) the effect of strain in concentration, and (iv) lithiation. We analyze convergence with respect to spatial and time discretization and find that very good results are achievable using both a staggered scheme and approximated strain interpolation. (The online version of this article, doi:10.1007/s00466-015-1235-1, contains supplementary material.)
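The splitting of the fourth-order Cahn–Hilliard equation into two second-order equations can be illustrated with a one-dimensional finite-difference toy: the concentration update and the chemical potential are computed as separate second-order relations. This is a plain explicit-Euler sketch on a periodic domain with invented parameters, not the paper's staggered finite-element scheme:

```python
import numpy as np

# Cahn-Hilliard as two coupled second-order equations:
#   c_t = M * Lap(mu),   mu = f'(c) - eps^2 * Lap(c),   f(c) = (c^2 - 1)^2 / 4.
n, h = 64, 1.0 / 64
M, eps, dt = 1.0, 0.05, 1e-9        # illustrative mobility, interface width, step

def lap(u):
    # periodic second-difference Laplacian
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2

rng = np.random.default_rng(1)
c = 0.1 * rng.standard_normal(n)    # small random initial concentration field
mass0 = c.sum()

for _ in range(100):
    mu = c**3 - c - eps**2 * lap(c)  # second-order equation for the potential
    c = c + dt * M * lap(mu)         # second-order, conservative update for c

# The divergence form of the update conserves the total concentration.
```

In the paper's finite-element setting the same idea admits low-order (C0) elements, since neither equation requires the higher regularity a direct fourth-order discretization would.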
Abstract:
This thesis provides a necessary and sufficient condition for asymptotic efficiency of a nonparametric estimator of the generalised autocovariance function of a Gaussian stationary random process. The generalised autocovariance function is the inverse Fourier transform of a power transformation of the spectral density, and encompasses the traditional and inverse autocovariance functions. Its nonparametric estimator is based on the inverse discrete Fourier transform of the same power transformation of the pooled periodogram. The general result is then applied to the class of Gaussian stationary ARMA processes and its implications are discussed. We illustrate that, for a class of contrast functionals and spectral densities, the minimum contrast estimator of the spectral density satisfies a Yule-Walker system of equations in the generalised autocovariance estimator. Selection of the pooling parameter, which characterizes the nonparametric estimator of the generalised autocovariance by controlling its resolution, is addressed by using a multiplicative periodogram bootstrap to estimate the finite-sample distribution of the estimator. A multivariate extension of recently introduced spectral models for univariate time series is considered, and an algorithm for the coefficients of a power transformation of matrix polynomials is derived, which allows the Wold coefficients to be obtained from the matrix coefficients characterizing the generalised matrix cepstral models. This algorithm also allows the definition of the matrix variance profile, providing important quantities for vector time series analysis. A nonparametric estimator based on a transformation of the smoothed periodogram is proposed for estimation of the matrix variance profile.
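The core of the nonparametric estimator described above fits in a few lines: power-transform the periodogram and invert the discrete Fourier transform. Pooling is omitted here for brevity (pool size one), and the white-noise check is purely illustrative:

```python
import numpy as np

# Generalised autocovariance (GACV) estimator: inverse DFT of a power
# transformation of the periodogram. p = 1 gives the ordinary (circular)
# sample autocovariances; p = -1 targets the inverse autocovariances.
def gacv(periodogram, p):
    return np.real(np.fft.ifft(periodogram ** p))

rng = np.random.default_rng(0)
n = 512
x = rng.standard_normal(n)          # white-noise sample for illustration
x = x - x.mean()
periodogram = np.abs(np.fft.fft(x)) ** 2 / n

g1 = gacv(periodogram, 1.0)         # lag-0 term equals the sample variance (Parseval)
```

For `p = 1` the lag-0 value reproduces the sample variance exactly, by Parseval's identity; pooling adjacent periodogram ordinates before the power transformation is what the thesis's pooling parameter controls.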