965 results for linear-threshold model


Relevance: 30.00%

Abstract:

In this paper, an advanced technique for the generation of deformation maps using synthetic aperture radar (SAR) data is presented. The algorithm estimates the linear and nonlinear components of the displacement, the error of the digital elevation model (DEM) used to cancel the topographic terms, and the atmospheric artifacts from a reduced set of low-spatial-resolution interferograms. The pixel candidates are selected from those presenting a good coherence level in the whole set of interferograms, and the resulting nonuniform mesh is tessellated with a Delaunay triangulation to establish connections among them. The linear component of movement and the DEM error are estimated by adjusting a linear model to the data on these connections only. Later on, this information, once unwrapped to retrieve the absolute values, is used to calculate the nonlinear component of movement and the atmospheric artifacts with alternate filtering techniques in both the temporal and spatial domains. The method presents high flexibility with respect to the required number of images and the baseline lengths; however, better results are obtained with large datasets of short-baseline interferograms. The technique has been tested with European Remote Sensing SAR data from an area of Catalonia (Spain) and validated with precise on-field leveling measurements.
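
The linear adjustment step can be pictured with a small least-squares sketch. This is a simplification, not the paper's estimator: it assumes the double-difference phases on a Delaunay arc are already unwrapped (the paper fits the model on wrapped data and unwraps later), and the wavelength, slant range and incidence angle below are illustrative placeholders.

```python
# Hedged sketch: least-squares fit of linear velocity and DEM error on one
# Delaunay arc, assuming already-unwrapped double-difference phases.
import numpy as np

WAVELENGTH = 0.0566        # C-band ERS wavelength [m] (assumption)
R = 850e3                  # slant range [m] (illustrative)
THETA = np.deg2rad(23.0)   # incidence angle (illustrative)

def fit_arc(phase, t_base, b_perp):
    """phase: double-difference phase per interferogram [rad];
       t_base: temporal baselines [yr]; b_perp: perpendicular baselines [m]."""
    # Design matrix: one column for linear velocity, one for DEM error.
    A = np.column_stack([
        (4 * np.pi / WAVELENGTH) * t_base,                       # velocity term
        (4 * np.pi / (WAVELENGTH * R * np.sin(THETA))) * b_perp  # DEM-error term
    ])
    (v, dem_err), *_ = np.linalg.lstsq(A, phase, rcond=None)
    return v, dem_err  # [m/yr], [m]
```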

Relevance: 30.00%

Abstract:

In this paper we develop a new linear approach to identify the parameters of a moving average (MA) model from the statistics of the output. First, we show that, under some constraints, the impulse response of the system can be expressed as a linear combination of cumulant slices. Then, this result is used to obtain a new well-conditioned linear method to estimate the MA parameters of a non-Gaussian process. The proposed method presents several important differences with existing linear approaches. The linear combination of slices used to compute the MA parameters can be constructed from different sets of cumulants of different orders, providing a general framework where all the statistics can be combined. Furthermore, it is not necessary to use second-order statistics (the autocorrelation slice), and therefore the proposed algorithm still provides consistent estimates in the presence of colored Gaussian noise. Another advantage of the method is that while most linear methods developed so far give totally erroneous estimates if the order is overestimated, the proposed approach does not require a previous estimation of the filter order. The simulation results confirm the good numerical conditioning of the algorithm and the improvement in performance with respect to existing methods.
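
As an illustration of expressing the impulse response through cumulant slices, the sketch below implements the classical single-slice relation h(k) = c3(q, k)/c3(q, 0) for an MA(q) process, which the combined-slice estimator of the paper generalizes; the biased sample cumulant and the h(0) = 1 normalization are assumptions of the sketch, not details taken from the paper.

```python
# Hedged sketch: single-slice estimation of MA(q) coefficients from the
# third-order output cumulants of a zero-mean non-Gaussian process.
import numpy as np

def third_order_cumulant(x, tau1, tau2):
    """Biased sample estimate of c3(tau1, tau2) = E[x(n) x(n+tau1) x(n+tau2)]."""
    n = len(x)
    m = n - max(tau1, tau2, 0)
    return np.mean(x[:m] * x[tau1:tau1 + m] * x[tau2:tau2 + m])

def ma_coeffs_from_slice(x, q):
    """Estimate h(0..q), normalized so that h(0) = 1, from the tau1 = q slice."""
    c_q0 = third_order_cumulant(x, q, 0)
    return np.array([third_order_cumulant(x, q, k) / c_q0 for k in range(q + 1)])
```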

Relevance: 30.00%

Abstract:

1. Species distribution models are increasingly used to address conservation questions, so their predictive capacity requires careful evaluation. Previous studies have shown how individual factors used in model construction can affect prediction. Although some factors probably have negligible effects compared to others, their relative effects are largely unknown. 2. We introduce a general "virtual ecologist" framework to study the relative importance of factors involved in the construction of species distribution models. 3. We illustrate the framework by examining the relative importance of five key factors (a missing covariate, spatial autocorrelation due to a dispersal process in presences/absences, sample size, sampling design and modeling technique) in a real study framework based on plants in a mountain landscape at the regional scale, and show that, for the parameter values considered here, most of the variation in prediction accuracy is due to sample size and modeling technique. Contrary to repeatedly reported concerns, spatial autocorrelation has only comparatively small effects. 4. This study shows the importance of using a nested statistical framework to evaluate the relative effects of factors that may affect species distribution models.
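
A minimal version of the virtual-ecologist idea can be sketched in a few lines: a virtual species with a known response to two covariates is sampled at several sample sizes and refitted, isolating the effect of one factor on prediction accuracy. This toy setup (logistic response, random sampling, a single modeling technique) is far simpler than the nested design of the study, and all names and parameter values are illustrative.

```python
# Hedged sketch of a minimal "virtual ecologist" experiment.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
env = rng.normal(size=(20_000, 2))                                # two environmental covariates
true_p = 1 / (1 + np.exp(-(1.5 * env[:, 0] - 2.0 * env[:, 1])))   # known "true" response
occurrence = rng.binomial(1, true_p)                              # virtual presences/absences

for n in (50, 200, 1000):                              # factor under study: sample size
    idx = rng.choice(len(env), size=n, replace=False)  # random sampling design
    model = LogisticRegression().fit(env[idx], occurrence[idx])
    auc = roc_auc_score(occurrence, model.predict_proba(env)[:, 1])
    print(f"sample size {n}: AUC on the full virtual landscape = {auc:.3f}")
```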

Relevance: 30.00%

Abstract:

We propose a finite element approximation of a system of partial differential equations describing the coupling between the propagation of electrical potential and large deformations of the cardiac tissue. The underlying mathematical model is based on the active strain assumption: the deformation tensor is assumed to admit a multiplicative decomposition into a passive part and an active part, the latter carrying the information on the propagation of the electrical potential and on the anisotropy of the cardiac tissue into the equations of either incompressible or compressible nonlinear elasticity that govern the mechanical response of the biological material. In addition, by changing from an Eulerian to a Lagrangian configuration, the bidomain or monodomain equations modeling the evolution of the electrical propagation exhibit a nonlinear diffusion term. Piecewise quadratic finite elements are employed to approximate the displacement field, whereas the pressure, electrical potentials and ionic variables are approximated by piecewise linear elements. Various numerical tests performed with a parallel finite element code illustrate that the proposed model can capture some important features of the electromechanical coupling, and show that our numerical scheme is efficient and accurate.
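
For orientation, the active strain decomposition can be written in the standard notation below; the symbols are generic and not necessarily those used in the paper.

```latex
% Hedged sketch of the active strain decomposition (generic notation):
\[
  \mathbf{F} = \mathbf{F}_E \,\mathbf{F}_A ,
  \qquad
  \mathbf{F}_A = \mathbf{I} + \gamma_f \,\mathbf{f}_0 \otimes \mathbf{f}_0 ,
\]
where $\mathbf{F}$ is the deformation gradient, $\mathbf{F}_A$ the active part driven
by the electrophysiology through an activation $\gamma_f$ along the fibre direction
$\mathbf{f}_0$, and $\mathbf{F}_E$ the passive elastic part that enters the
strain-energy function.
```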

Relevance: 30.00%

Abstract:

In this study, a model for the unsteady dynamic behaviour of a once-through counter-flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace, and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions, since they are all parts of a single tube. The present research is part of a study on the unsteady dynamics of an organic Rankine cycle power plant and will become part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means the transition from the steady initial state towards another steady state that corresponds to the changed process conditions. The solution method chosen was to find, using the finite difference method, the pressure of the process fluid at which the mass of the process fluid in the boiler equals the mass calculated from the mass flows into and out of the boiler during a time step. A special method for fast calculation of the thermal properties was used, because most of the calculation time is spent in evaluating the fluid properties. The boiler was divided into elements, and the values of the thermodynamic properties and mass flows were calculated in the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and tube wall, and the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that takes into account a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables in the nodes and of the calculation of these nodal values in the dynamic state. The initial state of the boiler was obtained from a steady process model that is not part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperatures and mass flow rates of both the heat source and the process fluid. A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was carried out. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate and that the stability criterion is a second-degree polynomial of the enthalpy change in the preheater; the numerical examination showed that oscillations did not occur in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the results was that there was no possibility to compare them with measurements, so the only way was to determine whether the obtained results were intuitively reasonable and whether they changed logically when the boundary conditions were changed. The numerical stability was checked in a test run in which there was no change in the input values; the differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat source side tests showed that the model gives results that are logical in the directions of the changes, and the order of magnitude of the timescale of the changes is also as expected.
The results of the tests on the process fluid side showed that the model gives reasonable results both for temperature changes that cause small alterations in the process state and for mass flow rate changes that cause very large alterations. The test runs showed that the dynamic model has no problems in calculating cases in which the temperature of the entering heat source suddenly drops below that of the tube wall or the process fluid.
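
The core solution step described above, finding the pressure that closes the mass balance over a time step, can be sketched as a one-dimensional root search. The property routine mass_in_boiler is hypothetical; a real implementation would evaluate the fluid density element by element from an equation of state.

```python
# Hedged sketch of the per-time-step pressure search (illustrative bounds).
from scipy.optimize import brentq

def pressure_step(mass_prev, m_dot_in, m_dot_out, dt, mass_in_boiler,
                  p_lo=0.5e5, p_hi=40e5):
    """Return the pressure [Pa] at which the fluid mass held in the boiler
    equals the mass implied by the inlet/outlet flows over one time step."""
    target = mass_prev + (m_dot_in - m_dot_out) * dt   # mass balance [kg]
    residual = lambda p: mass_in_boiler(p) - target    # zero at the sought pressure
    return brentq(residual, p_lo, p_hi)
```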

Relevance: 30.00%

Abstract:

Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task.
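
The recasting step can be illustrated on a single saturable (Hill-type) rate in generic notation, which is not necessarily the notation of the paper.

```latex
% Hedged sketch of recasting a saturable rate into GMA form: introduce the
% auxiliary variable w = K^n + s^n, then
\[
  v = \frac{V\, s^{n}}{K^{n} + s^{n}}
  \;=\; V\, s^{n} w^{-1},
  \qquad
  \dot{w} = n\, s^{\,n-1}\,\dot{s},
\]
so each rate becomes a product of power laws (a GMA term) at the price of one
auxiliary variable, and the global-optimization machinery available for GMA
models can be applied to the recast system.
```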

Relevance: 30.00%

Abstract:

The formation and semiclassical evaporation of two-dimensional black holes is studied in an exactly solvable model. Above a certain threshold energy flux, collapsing matter forms a singularity inside an apparent horizon. As the black hole evaporates the apparent horizon recedes and meets the singularity in a finite proper time. The singularity emerges naked, and future evolution of the geometry requires boundary conditions to be imposed there. There is a natural choice of boundary conditions which matches the evaporated black hole solution onto the linear dilaton vacuum. Below the threshold energy flux no horizon forms and boundary conditions can be imposed where infalling matter is reflected from a timelike boundary. All information is recovered at spatial infinity in this case.

Relevance: 30.00%

Abstract:

Gene filtering is a useful preprocessing technique often applied to microarray datasets. However, it is not common practice because clear guidelines are lacking and it bears the risk of excluding some potentially relevant genes. In this work, we propose to model microarray data as a mixture of two Gaussian distributions, which allows us to obtain an optimal filter threshold in terms of the gene expression level.
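
A minimal sketch of the proposed filter, assuming log-scale expression values and taking the threshold at the point between the two component means where their posteriors cross; the exact thresholding rule used in the paper may differ, and `data` in the usage comment is a hypothetical genes-by-samples log-expression matrix.

```python
# Hedged sketch: two-component Gaussian mixture on expression levels and a
# posterior-crossover threshold for gene filtering.
import numpy as np
from sklearn.mixture import GaussianMixture

def gene_filter_threshold(log_expression):
    x = np.asarray(log_expression).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    lo, hi = np.sort(gmm.means_.ravel())
    grid = np.linspace(lo, hi, 1000).reshape(-1, 1)   # scan between the two means
    post = gmm.predict_proba(grid)
    low_comp = int(np.argmin(gmm.means_.ravel()))
    return grid[np.argmin(np.abs(post[:, low_comp] - 0.5))][0]

# Usage (hypothetical matrix): keep = data.max(axis=1) > gene_filter_threshold(data.ravel())
```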

Relevance: 30.00%

Abstract:

Background. We elaborated a model that predicts the centiles of the 25(OH)D distribution taking into account seasonal variation. Methods. Data from two Swiss population-based studies were used to generate (CoLaus) and validate (Bus Santé) the model. Serum 25(OH)D was measured by ultra-high-pressure LC-MS/MS and immunoassay. Linear regression models on square-root-transformed 25(OH)D values were used to predict centiles of the 25(OH)D distribution. Distribution functions of the observations from the replication set predicted with the model were inspected to assess replication. Results. Overall, 4,912 and 2,537 Caucasians were included in the original and replication sets, respectively. Mean (SD) 25(OH)D, age, BMI, and % of men were 47.5 (22.1) nmol/L, 49.8 (8.5) years, 25.6 (4.1) kg/m², and 49.3% in the original study. The best model included gender, BMI, and sin-cos functions of the measurement day. Sex- and BMI-specific 25(OH)D centile curves as a function of measurement date were generated. The model estimates any centile of the 25(OH)D distribution for given values of sex, BMI, and date, as well as the quantile corresponding to a given 25(OH)D measurement. Conclusions. We generated and validated centile curves of 25(OH)D in the general adult Caucasian population. These curves can help rank vitamin D status by centile independently of when 25(OH)D is measured.
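
The kind of model described can be sketched as an ordinary least-squares fit on square-root-transformed 25(OH)D with sex, BMI and an annual sin/cos term; the variable names and the residual-based centile step below are assumptions, not the exact procedure of the study.

```python
# Hedged sketch of a seasonal centile model for 25(OH)D.
import numpy as np
from scipy.stats import norm

def fit_centile_model(vit_d, male, bmi, day_of_year):
    y = np.sqrt(vit_d)                                # square-root transform
    w = 2 * np.pi * np.asarray(day_of_year) / 365.25  # annual angular position
    X = np.column_stack([np.ones_like(y), male, bmi, np.sin(w), np.cos(w)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = np.std(y - X @ beta, ddof=X.shape[1])     # residual SD on sqrt scale
    return beta, sigma

def predict_centile(beta, sigma, male, bmi, day_of_year, centile=0.5):
    w = 2 * np.pi * day_of_year / 365.25
    x = np.array([1.0, male, bmi, np.sin(w), np.cos(w)])
    return (x @ beta + norm.ppf(centile) * sigma) ** 2   # back-transform to nmol/L
```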

Relevance: 30.00%

Abstract:

Industry's growing need for higher productivity is placing new demands on mechanisms connected to electrical motors, because these can easily lead to vibration problems due to fast dynamics. Furthermore, the nonlinear effects caused by a motor frequently reduce servo stability, which diminishes the controller's ability to predict and maintain speed. Hence, the flexibility of a mechanism and its control has become an important area of research. The basic approach in control system engineering is to assume that the mechanism connected to a motor is rigid, so that vibrations in the tool mechanism, reel, gripper or any apparatus connected to the motor are not taken into account. This might reduce the ability of the machine system to carry out its assignment and shorten the lifetime of the equipment. Nonetheless, it is usually more important to know how the mechanism, or in other words the load on the motor, behaves. A nonlinear load control method for a permanent magnet linear synchronous motor is developed and implemented in the thesis. The purpose of the controller is to drive the flexible load to the desired velocity reference as fast as possible and without excessive oscillation. The control method is based on an adaptive backstepping algorithm, with its stability ensured by the Lyapunov stability theorem. As a reference controller for the backstepping method, a hybrid neural controller is introduced, in which the linear motor itself is controlled by a conventional PI velocity controller and the vibration of the associated flexible mechanism is suppressed from an outer control loop using a compensation signal from a multilayer perceptron network. To avoid the local minimum problem inherent in neural networks, the initial weights are searched for offline by means of a differential evolution algorithm. The states of the mechanical system needed by the controllers are estimated using a Kalman filter. The theoretical results obtained from the control design are validated with a lumped-mass model of the mechanism. Generalization of the mechanism allows the methods derived here to be widely implemented in machine automation. The control algorithms are first designed in a specially introduced nonlinear simulation model and then implemented in the physical linear motor using a DSP (digital signal processor) application. The measurements prove that both controllers are capable of suppressing vibration, but that the backstepping method is superior due to its response accuracy and stability properties.
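
The lumped-mass plant mentioned above can be sketched as a two-mass model, with the motor and the flexible load coupled by a spring-damper; the parameter values and the explicit Euler step are illustrative, not those of the thesis.

```python
# Hedged sketch of a two-mass lumped model of a motor driving a flexible load.
import numpy as np

def two_mass_step(state, thrust, dt, m_motor=5.0, m_load=10.0, k=8e4, c=50.0):
    """state = [x_m, v_m, x_l, v_l]; returns the state after one Euler step."""
    x_m, v_m, x_l, v_l = state
    f_link = k * (x_m - x_l) + c * (v_m - v_l)    # spring-damper coupling force
    a_m = (thrust - f_link) / m_motor             # motor-side acceleration
    a_l = f_link / m_load                         # load-side acceleration
    return np.array([x_m + v_m * dt, v_m + a_m * dt,
                     x_l + v_l * dt, v_l + a_l * dt])
```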

Relevance: 30.00%

Abstract:

We develop an analytical approach to the susceptible-infected-susceptible epidemic model that allows us to unravel the true origin of the absence of an epidemic threshold in heterogeneous networks. We find that a delicate balance between the number of high degree nodes in the network and the topological distance between them dictates the existence or absence of such a threshold. In particular, small-world random networks with a degree distribution decaying slower than an exponential have a vanishing epidemic threshold in the thermodynamic limit.
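
For reference, the standard degree-based mean-field estimate of the SIS threshold, which the analysis above goes beyond, reads:

```latex
\[
  \lambda_c \simeq \frac{\langle k \rangle}{\langle k^{2} \rangle},
\]
so the threshold vanishes in the thermodynamic limit whenever the second moment
of the degree distribution diverges.
```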

Relevance: 30.00%

Abstract:

Anthropomorphic model observers are mathematical algorithms which are applied to images with the ultimate goal of predicting human signal detection and classification accuracy across varieties of backgrounds, image acquisitions and display conditions. A limitation of current channelized model observers is their inability to handle irregularly shaped signals, which are common in clinical images, without a high number of directional channels. Here, we derive a new linear model observer based on convolution channels which we refer to as the "Filtered Channel observer" (FCO), as an extension of the channelized Hotelling observer (CHO) and the nonprewhitening with an eye filter (NPWE) observer. In analogy to the CHO, this linear model observer can take the form of a single template with an external noise term. To compare with human observers, we tested signals with irregular and asymmetrical shapes, spanning sizes from those of lesions down to those of microcalcifications, in 4-AFC breast tomosynthesis detection tasks, with three different contrasts for each case. Whereas humans uniformly outperformed conventional CHOs, the FCO observer outperformed humans for every signal with only one exception. Additive internal noise in the models allowed us to degrade model performance and match human performance. We could not match all the human performances with a model with a single internal noise component for all signal shape, size and contrast conditions. This suggests either that the internal noise varies across signals or that the model cannot entirely capture the human detection strategy. However, the FCO model offers an efficient way to approximate human observer performance for non-symmetric signals.
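
The channelized Hotelling observer that the FCO extends can be sketched as follows; the channel matrix U is left abstract (Gabor or Laguerre-Gauss channels are common choices), and averaging the two class covariances is one conventional option rather than the paper's exact recipe.

```python
# Hedged sketch of a channelized Hotelling observer (CHO) template and test statistic.
import numpy as np

def cho_template(signal_imgs, background_imgs, U):
    """signal_imgs, background_imgs: (n_images, n_pixels); U: (n_pixels, n_channels)."""
    v_s = signal_imgs @ U                    # channel outputs, signal-present
    v_b = background_imgs @ U                # channel outputs, signal-absent
    S = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_b, rowvar=False))
    return np.linalg.solve(S, v_s.mean(axis=0) - v_b.mean(axis=0))

def cho_decision(image, U, w):
    return (image @ U) @ w                   # scalar test statistic for one image
```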

Relevance: 30.00%

Abstract:

The aim of the thesis is to study the principles of the permanent magnet linear synchronous motor (PMLSM) and to develop a simulator model of a direct thrust force controlled PMLSM. The basic motor model is described by the traditional two-axis equations. The end effects, cogging force and friction model are also included in the final motor model. Direct thrust force control of the PMLSM is described and modelled. The full system model is validated by comparison with data provided by the motor manufacturer.
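
For orientation, the two-axis (dq) equations of a PMLSM and the resulting thrust are given below in one common notation; symbols and sign conventions may differ from those of the thesis.

```latex
% Hedged sketch of the PMLSM dq equations (generic notation):
\begin{align*}
  u_d &= R_s i_d + L_d \frac{di_d}{dt} - \omega_e L_q i_q, &
  u_q &= R_s i_q + L_q \frac{di_q}{dt} + \omega_e \left( L_d i_d + \psi_{PM} \right),\\
  F_x &= \frac{3\pi}{2\tau}\left[ \psi_{PM} i_q + (L_d - L_q)\, i_d i_q \right], &
  \omega_e &= \frac{\pi}{\tau}\, v,
\end{align*}
where $\tau$ is the pole pitch and $v$ the linear speed; direct thrust force
control acts on $F_x$ and the stator flux linkage through the selection of
inverter voltage vectors.
```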

Relevance: 30.00%

Abstract:

This technical note describes the construction of a low-cost optical detector. The device is composed of a highly sensitive linear light sensor (model ILX554) and a microcontroller. The performance of the detector was demonstrated by detecting emission and Raman spectra of several atomic systems, and the results reproduce those found in the literature.

Relevance: 30.00%

Abstract:

The Practical Stochastic Model is a simple and robust method to describe coupled chemical reactions. The connection between this stochastic method and a deterministic method was initially established to understand how the parameters and variables that describe the concentrations in the two methods are related. It was necessary to define two main concepts to make this connection: the filling of compartments, or dilutions, and the rate of reaction enhancement. The parameters, variables, and time of the stochastic method were scaled with the size of the compartment and compared with a deterministic method. The deterministic approach was employed as an initial reference to achieve a consistent stochastic result. Finally, an independent, robust stochastic method was obtained, which could be compared with the Stochastic Simulation Algorithm developed by Gillespie (1977). The Practical Stochastic Model produced absolute values that were essential to describe non-linear chemical reactions with a simple structure, and allowed for a correct description of the chemical kinetics.
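
For comparison, Gillespie's direct method mentioned above can be sketched on a toy reversible isomerization A <-> B; the rate constants and initial counts are illustrative.

```python
# Hedged sketch of Gillespie's direct method (SSA) for A <-> B.
import numpy as np

def gillespie_ssa(a0=100, b0=0, k_f=1.0, k_r=0.5, t_end=10.0, seed=0):
    rng = np.random.default_rng(seed)
    t, a, b = 0.0, a0, b0
    trajectory = [(t, a, b)]
    while t < t_end:
        rates = np.array([k_f * a, k_r * b])     # propensities of A->B and B->A
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)        # waiting time to the next reaction
        if rng.random() < rates[0] / total:      # choose which reaction fires
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
        trajectory.append((t, a, b))
    return trajectory
```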