958 results for stationary rotation
Abstract:
The aim of this project is to design a system capable of controlling the rotation speed of a DC motor as a function of the temperature value obtained from a sensor. To this end, a microcontroller generates a PWM signal whose duty cycle depends on the measured temperature. The design phase has two clearly differentiated parts, concerning the hardware and the software. The hardware design can in turn be divided in two. First, circuitry had to be designed to adapt the voltage levels delivered by the temperature sensor to the levels required by the ADC, which digitizes the information for subsequent processing by the microcontroller. A circuit therefore had to be designed to correct the offset and slope of the sensor's voltage-temperature function, adapting it to the voltage range required by the ADC. Second, the circuit in charge of controlling the rotation speed of the motor had to be designed. This circuit is based on a MOSFET transistor operating in switching mode, driven by the PWM signal mentioned above. In this way, varying the duty cycle of the PWM signal proportionally varies the voltage across the motor, and hence its rotation speed. Regarding the software design, the microcontroller was programmed to generate a PWM signal on one of its pins as a function of the value delivered by the ADC, whose input is connected to the voltage obtained from the sensor-conditioning circuit. The microcontroller is also used to display the measured temperature value on an LCD screen. An mbed development board, which includes the integrated microcontroller, was chosen for this project because it eases prototyping.
Both parts were then integrated, and the system was tested to verify correct operation. Since the result depends on the measured temperature, temperature variations had to be simulated in order to check the results obtained at different temperatures; a hot-air gun was used for this purpose. Once operation had been verified, the printed circuit board was designed as a final step. In conclusion, a system with an acceptable level of accuracy and precision, given the limitations of the system, was developed.
SUMMARY: It is clear that people's daily lives depend more and more on technology and science. Tasks tend to be automated, making them simpler and, as a result, making users' lives more comfortable. Behind every task that can be controlled there is an electronic system. In this project, a microcontroller-based control system was designed for a fan, allowing it to speed up when the temperature rises and slow down as the environment gets colder. For this purpose, a microcontroller was programmed to generate a signal that controls the rotation speed of the fan depending on the data acquired from a temperature sensor. After testing the whole design in the laboratory, the next step was to build a prototype, which allows the future improvements to the system that are discussed in the corresponding section of the thesis.
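The temperature-to-duty-cycle mapping described above can be sketched in a few lines; the temperature limits, supply voltage, and function names below are illustrative assumptions, not the thesis's actual firmware.

```python
def temperature_to_duty(temp_c, t_min=20.0, t_max=60.0):
    """Map a temperature reading (deg C) to a PWM duty cycle in [0, 1].

    Below t_min the fan idles at 0% duty; above t_max it runs at 100%;
    in between the duty cycle grows linearly with temperature.
    The limits are hypothetical, chosen only for illustration.
    """
    if temp_c <= t_min:
        return 0.0
    if temp_c >= t_max:
        return 1.0
    return (temp_c - t_min) / (t_max - t_min)


def average_motor_voltage(duty, v_supply=12.0):
    """With a switching MOSFET, the mean voltage across the motor is
    approximately duty * supply voltage (ideal switch, losses ignored),
    which is why speed varies in proportion to the duty cycle."""
    return duty * v_supply
```

In a real firmware the duty value would be written to the PWM peripheral each time a new ADC reading arrives.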
Abstract:
This research proposes a generic methodology for dimensionality reduction of time-frequency representations applied to the classification of different types of biosignals. The methodology directly deals with the highly redundant and irrelevant data contained in these representations, combining a first stage of irrelevant-data removal by variable selection with a second stage of redundancy reduction using methods based on linear transformations. The study addresses two techniques that provided similar performance: the first is based on the selection of a set of the most relevant time-frequency points, whereas the second selects the most relevant frequency bands. The first technique needs a smaller number of components, leading to a lower-dimensional feature space, but the second better captures the time-varying dynamics of the signal and therefore provides a more stable performance. In order to evaluate the generalization capabilities of the proposed methodology, it has been applied to two types of biosignals with different kinds of non-stationary behavior: electroencephalographic and phonocardiographic biosignals. Even though these two databases contain samples with different degrees of complexity and a wide variety of characterizing patterns, the results demonstrate a good accuracy for the detection of pathologies, over 98%. These results open the possibility of extrapolating the methodology to the study of other biosignals.
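The two-stage pipeline (irrelevant-point removal by variable selection, then linear redundancy reduction) can be sketched generically as follows. The relevance score (absolute correlation with the label) and the parameter values are illustrative assumptions standing in for the thesis's exact choices, and PCA stands in for the linear transformation stage.

```python
import numpy as np

def select_then_project(X, y, k_points=50, n_components=5):
    """Two-stage reduction on flattened time-frequency maps.

    Stage 1 keeps the k_points columns most relevant to the label
    (here: largest |correlation| with y, a stand-in relevance score).
    Stage 2 removes redundancy among the kept columns with PCA,
    computed via an SVD (a linear transformation).
    """
    Xc = X - X.mean(axis=0)           # center each time-frequency point
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    relevance = np.abs(Xc.T @ yc) / denom
    keep = np.argsort(relevance)[-k_points:]   # most relevant points
    Xs = Xc[:, keep]
    # PCA: project the selected points onto the leading right singular vectors
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[:n_components].T
```

The same skeleton applies to the frequency-band variant by grouping columns into bands before scoring.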
Abstract:
The adequate combination of reduced tillage and crop rotation could increase the viability of dryland agriculture in Mediterranean zones. Crop simulation models can help to examine various tillage-rotation combinations and to explore management scenarios. The Decision Support System for Agrotechnology Transfer (DSSAT) (Hoogenboom et al., 2010) provides a suite of crop models suitable for this task. The objective of this work was to simulate the effects of two tillage systems, conventional tillage (ConvT) and no tillage (NoT), and three crop rotations, continuous cereal (CC), fallow-cereal (FallowC) and legume-cereal (LegumeC), under dry conditions, on cereal yield, soil organic carbon (SOC) and soil organic nitrogen (SON) in a 15-year experiment, comparing these simulations with field observations.
Abstract:
Analytical expressions for the current to a cylindrical Langmuir probe at rest in an unmagnetized plasma are compared with results from both steady-state Vlasov and particle-in-cell simulations. Probe bias potentials that are much greater than the plasma temperature (assumed equal for ions and electrons), as of interest for bare conductive tethers, are considered. At a very high bias, both the electric potential and the attracted-species density exhibit complex radial profiles; in particular, the density exhibits a minimum well within the plasma sheath and a maximum closer to the probe. Excellent agreement is found between analytical and numerical results, for values of the probe radius R close to the maximum radius Rmax for orbital-motion-limited (OML) collection at a particular bias, in the following profile features: the values and positions of the density minimum and maximum, the position of the sheath boundary, and the value of a radius characterizing the no-space-charge behavior of the potential near the high-bias probe. Good agreement between theory and simulations is also found for parametric laws jointly covering three characteristic R ranges: sheath radius versus probe radius and bias for Rmax; density minimum versus probe bias for Rmax; and (weakly bias-dependent) current drop below the OML value versus probe radius for R > Rmax.
Abstract:
ABSTRACT: Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually, and requires the use of adequate machine learning and data mining techniques. Classification is one of the most important such techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model or classifier from available training data, and second, classify new incoming unseen data instances using the learned classifier. Classification is supervised when all class labels are present in the training data (i.e., fully labeled data), semi-supervised when only some class labels are known (i.e., partially labeled data), and unsupervised when all class labels are missing from the training data (i.e., unlabeled data). In addition, besides this taxonomy, the classification problem can be categorized as uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), or as stationary or streaming depending on the characteristics of the data and the rate of change underlying them. Throughout this thesis, we deal with the classification problem under three different settings, namely supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we basically used Bayesian network classifiers as models. The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data.
They are proposed from two different points of view. The first method, named CB-MBC, is based on a greedy forward wrapper selection approach, while the second, named MB-MBC, is a filter constraint-based approach built on Markov blankets. Both methods are applied to two important real-world problems, namely the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used to solve the Parkinson's disease prediction problem, namely multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, results are promising in terms of classification accuracy as well as the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables. The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their very rapid generation process and their concept-drifting aspect. That is, the learned concepts and/or the underlying distribution are likely to change and evolve over time, which makes the current classification model out of date and requires it to be updated. CPL-DS uses the Kullback-Leibler divergence and bootstrapping to quantify and detect three possible kinds of drift: feature, conditional, or dual. If any drift occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged.
CPL-DS is general, as it can be applied to several classification models. Using two different models, namely the naive Bayes classifier and logistic regression, CPL-DS is tested with synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware. Experimental results show that our approach is effective for detecting different kinds of drift from partially labeled data streams, while maintaining good classification performance. Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. If a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
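The drift-monitoring step of the adaptive methods relies on the Page-Hinkley test. A minimal sketch of that test is shown below, with generic parameter names; the thesis applies it to average log-likelihood scores, and the default delta/lambda values here are illustrative assumptions.

```python
class PageHinkley:
    """Page-Hinkley change-detection test (sketch).

    Signals a drift when the monitored value rises persistently above
    its running mean by more than a threshold lambda_; delta is a small
    tolerance that damps noise. To detect a sustained *drop* in a score
    such as average log-likelihood, feed the negated score.
    """

    def __init__(self, delta=0.005, lambda_=50.0):
        self.delta = delta
        self.lambda_ = lambda_
        self.total = 0.0    # running sum of observations
        self.n = 0          # number of observations
        self.cum = 0.0      # cumulative deviation m_T
        self.min_cum = 0.0  # minimum of m_T seen so far

    def update(self, x):
        """Feed one observation; return True if a drift is signalled."""
        self.n += 1
        self.total += x
        mean = self.total / self.n
        self.cum += x - mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.lambda_
```

On drift, a monitor like this would trigger either the local adaptation (LA-MB-MBC) or the full relearning (GA-MB-MBC) described above.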
Abstract:
Application of nitrogen (N) fertilizers in agricultural soils increases the risk of N loss to the atmosphere in the form of ammonia (NH3), nitrous oxide (N2O) and nitric oxide (NO), and to water bodies as nitrate (NO3-). The implementation of agricultural management practices can affect these losses. In Mediterranean irrigation systems, the greatest losses of NO3- through leaching occur within the irrigation and intercrop periods. One way to abate these losses during the intercrop period is the use of cover crops that absorb part of the residual N from the root zone (Gabriel and Quemada, 2011). Moreover, during the following crop, these species could be applied as amendments to the soil, providing both C and N. Work on the effect of cover and catch crops in decreasing the pool of N potentially lost has focused primarily on NO3- leaching. The aim of this work was to evaluate the effect of cover crops on N2O emission during the intercrop period in a maize system, and of their subsequent incorporation into the soil in the following maize crop.
Abstract:
Crop simulation models allow various tillage-rotation combinations to be analyzed and management scenarios to be explored. This study was conducted to test the DSSAT (Decision Support System for Agrotechnology Transfer) modelling system in rainfed semiarid central Spain. The focus is on the combined effect of tillage system and winter cereal-based rotations (cereal/legume/fallow) on crop yield and soil quality. The observed data come from a 16-year field experiment. The CERES and CROPGRO models, included in DSSAT v4.5, were used to simulate crop growth and yield, and DSSAT-CENTURY was used for the soil organic carbon (SOC) and soil nitrogen (SN) simulations. Genetic coefficients were calibrated using part of the observed data. Field observations showed that barley grain yield was lower for continuous cereal (BB) than for vetch (VB) and fallow (FB) rotations under both tillage systems. The CERES-Barley model also reflected this trend. The model predicted higher yield under conventional tillage (CT) than under no tillage (NT), probably due to the higher nitrogen availability in CT shown in the simulations. SOC and SN were higher in NT than in CT in the top layer only, and decreased with depth in both simulated and observed values. These results suggest that CT-VB and CT-FB were the best combinations for the dryland conditions studied. However, CT presented lower SN and SOC content than NT. This study shows how models can be a useful tool for assessing and predicting crop growth and yield under different management systems and specific edapho-climatic conditions. Additional key words: CENTURY model; CERES-Barley; crop simulation models; DSSAT; sequential simulation; soil organic carbon.
Abstract:
Conservation tillage and crop rotation have spread during the last decades because they promote several positive effects (increased soil organic content, reduced soil erosion, and enhanced carbon sequestration) (Six et al., 2004). However, these benefits could be partly counterbalanced by negative effects on the release of nitrous oxide (N2O) (Linn and Doran, 1984). There is a lack of data from long-term tillage system studies, particularly in Mediterranean agro-ecosystems. The aim of this study was to evaluate the effects of long-term (>17 years) tillage systems (no tillage (NT), minimum tillage (MT) and conventional tillage (CT)) and of a crop rotation (wheat (W)-vetch (V)-barley (B)) versus wheat monoculture (M) on N2O emissions. Additionally, yield-scaled N2O emissions (YSNE) and N uptake efficiency (NUpE) were assessed for each treatment.
Abstract:
The deformation and damage mechanisms of carbon fiber-reinforced epoxy laminates deformed in shear were studied by means of X-ray computed tomography. In particular, the evolution of matrix cracking, interply delamination and fiber rotation was ascertained as a function of the applied strain. In order to provide quantitative information, an algorithm was developed to automatically determine the crack density and the fiber orientation from the tomograms. The investigation provided new insights into the complex interaction between the different damage mechanisms (i.e. matrix cracking and interply delamination) as a function of the applied strain, ply thickness and ply location within the laminate, as well as quantitative data about the evolution of matrix cracking and fiber rotation during deformation.
Abstract:
We present a compact formula for the derivative of a 3-D rotation matrix with respect to its exponential coordinates. A geometric interpretation of the resulting expression is provided, as well as its agreement with other less compact but better-known formulas. To the best of our knowledge, this simpler formula does not appear anywhere in the literature. By providing this more compact expression, we hope to alleviate the common pressure to reluctantly resort to alternative representations in various computational applications, simply as a means to avoid the complexity of differential analysis in exponential coordinates.
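A numerical sanity check of a derivative of this kind is straightforward: build the rotation from its exponential coordinates with the Rodrigues formula and compare an analytic candidate against central finite differences. The closed form coded below is an assumed candidate expression (reproduced from memory, not quoted from the paper); the finite-difference comparison is the actual check.

```python
import numpy as np

def hat(v):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(v):
    """Rotation matrix from exponential coordinates v = theta * axis."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    k = hat(v / theta)
    return np.eye(3) + np.sin(theta) * k + (1.0 - np.cos(theta)) * (k @ k)

def dR_dvi(v, i):
    """Assumed compact candidate for dR/dv_i:
    ( v_i [v]x + [ v x ((I - R) e_i) ]x ) R / |v|^2."""
    R = rodrigues(v)
    e = np.zeros(3); e[i] = 1.0
    num = v[i] * hat(v) + hat(np.cross(v, (np.eye(3) - R) @ e))
    return num @ R / (v @ v)

def dR_dvi_fd(v, i, h=1e-6):
    """Central finite-difference derivative, used as the reference."""
    e = np.zeros(3); e[i] = h
    return (rodrigues(v + e) - rodrigues(v - e)) / (2.0 * h)
```

Agreement of the candidate with the finite-difference reference at generic coordinates is strong evidence (not proof) that the closed form is correct.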
Abstract:
The calibration results of one anemometer equipped with several rotors of varying size were analyzed. In each case, the 30-pulses-per-turn output signal of the anemometer was studied using Fourier series decomposition and correlated with the anemometer factor (i.e., the anemometer transfer function). Also, a 3-cup analytical model was correlated to the data resulting from the wind tunnel measurements. Results indicate good correlation between the post-processed output signal and the working condition of the cup anemometer. This correlation was also reflected in the results from the proposed analytical model. The present work thus reveals the possibility of remotely checking cup anemometer status, indicating the presence of anomalies and, therefore, a decrease in the reliability of the wind sensor.
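The Fourier-series step above can be sketched as follows: treat the 30 samples recorded over one rotor turn as one period of a signal and extract the mean level and first harmonic amplitudes. The function name and the choice of three harmonics are illustrative assumptions, not the paper's exact processing chain.

```python
import numpy as np

def harmonic_amplitudes(one_turn_signal, n_harmonics=3):
    """Fourier-series decomposition of a once-per-turn periodic signal
    (e.g. the 30-pulses-per-turn rotation-speed record of one turn).

    Returns the mean level and the amplitudes of the first harmonics;
    the relative size of these harmonics is the kind of feature that
    can be correlated with rotor condition.
    """
    x = np.asarray(one_turn_signal, dtype=float)
    c = np.fft.rfft(x) / len(x)          # complex Fourier coefficients
    mean = c[0].real                     # DC term = mean over the turn
    amps = 2.0 * np.abs(c[1:1 + n_harmonics])  # amplitude of harmonics 1..n
    return mean, amps
```

Tracking these amplitudes over time would flag anomalies (e.g. a damaged cup) without physical access to the sensor.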
Availability and uptake of trace elements in a forage rotation under conservation and plough tillage
Abstract:
After 14 years under conventional plough tillage (CT) or conservation minimum tillage (MT), the soil-available Al, Fe, Mn, Cu and Zn (0-5, 5-15 and 15-30 cm layers) and their plant uptake were evaluated over two years in a ryegrass-maize forage rotation in NW Spain (temperate-humid region). A three-way ANOVA showed that trace element concentrations in soil were mainly influenced by sampling date, followed by soil depth and tillage system (35-73 %, 7-58 % and 3-11 % of variance explained, respectively). Except for Fe (CT) and Al (CT and MT), the elemental concentrations decreased with depth, the stratification being stronger under MT. For soil-available Al, Fe, Mn and Cu, the concentrations were higher in CT than in MT (5-15 and 15-30 cm layers) or were not affected by tillage system (0-5 cm). In contrast, the available Zn contents were higher in MT than in CT at the soil surface and did not differ in deeper layers. Crop concentrations of Al, Fe and Cu were not influenced by tillage system, which explained 22 % of the Mn variance in maize (CT > MT in the more humid year) and 18 % of the Zn variance in ryegrass (MT > CT in both years). However, in the summer crop (maize) the concentrations of Fe, Mn and Zn tended to be higher in MT than in CT under drought conditions, while the opposite was true in the year without water limitation. Therefore, under the studied conditions of climate, soil, tillage and crop rotation, little influence of the tillage system on crop nutritive value would be expected. To minimize the potential effect of Zn (maize) and Cu (maize and ryegrass) deficiency on crop yields, the inclusion of these micro-nutrients in the fertilization schedule is recommended, as well as liming to alleviate Al toxicity in maize crops.
Abstract:
Long-term conservation tillage can modify vertical distribution of nutrients in soil profiles and alter nutrient availability and yields of crops.
Abstract:
Data of diverse crop rotations from five locations across Europe were distributed to modelers to investigate the capability of models to handle complex crop rotations and management interactions.
Abstract:
The ballast pick-up (or ballast train-induced-wind erosion (BTE)) phenomenon is a limiting factor for the maximum allowed operational train speed. Determining the conditions for the initiation of motion of the ballast stones due to the wind gust created by high-speed trains is critical to predicting the start of ballast pick-up because, once motion is initiated, a saltation-like chain reaction can take place. The aim of this paper is to present a model to evaluate the effect of a random aerodynamic impulse on stone motion initiation, and an experimental study performed to check the capability of the proposed model to classify trains by their effect on the ballast due to the flow they generate. A measurement campaign was performed at kp 69+500 on the Madrid-Barcelona High Speed Line. The results obtained show the feasibility of the proposed method and contribute to a technique for BTE characterization, which can be relevant for the development of train interoperability standards.