945 results for mean field independent component analysis
Abstract:
INTRODUCTION: Objective assessment of motor skills has become an important challenge in minimally invasive surgery (MIS) training. Currently, there is no gold standard for defining and determining residents' surgical competence. To aid in the decision process, we analyze the validity of a supervised classifier to determine the degree of MIS competence based on assessment of psychomotor skills. METHODOLOGY: An adaptive neuro-fuzzy inference system (ANFIS) is trained to classify performance in a box trainer peg transfer task performed by two groups (expert/non-expert). There were 42 participants included in the study: the non-expert group consisted of 16 medical students and 8 residents (< 10 MIS procedures performed), whereas the expert group consisted of 14 residents (> 10 MIS procedures performed) and 4 experienced surgeons. Instrument movements were captured by means of the Endoscopic Video Analysis (EVA) tracking system. Nine motion analysis parameters (MAPs) were analyzed, including time, path length, depth, average speed, average acceleration, economy of area, economy of volume, idle time and motion smoothness. Data reduction was performed by means of principal component analysis, and the result was then used to train the ANFIS net. Performance was measured by leave-one-out cross-validation. RESULTS: The ANFIS presented an accuracy of 80.95%, with 13 experts and 21 non-experts correctly classified. Total root mean square error was 0.88, while the area under the classifier's ROC curve (AUC) was 0.81. DISCUSSION: We have shown the usefulness of ANFIS for classification of MIS competence in a simple box trainer exercise. The main advantage of ANFIS resides in its continuous output, which allows fine discrimination of surgical competence. There are, however, challenges that must be taken into account when considering the use of ANFIS (e.g. training time, architecture modeling).
Despite this, we have shown the discriminative power of ANFIS for a low-difficulty box trainer task, regardless of the individual significance of each MAP. Future studies are required to confirm these findings and to include new tasks, conditions and sample populations.
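The evaluation pipeline described above, PCA for data reduction followed by a supervised classifier scored with leave-one-out cross-validation, can be sketched as follows. ANFIS is not a standard scikit-learn estimator, so a logistic regression stands in for the classifier stage, and the motion data are synthetic; this is an illustration of the pipeline shape, not the study's actual implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for the 9 motion analysis parameters (MAPs) of
# 42 participants; 'experts' (label 1) are shifted in a few MAPs.
X = rng.normal(size=(42, 9))
y = np.array([0] * 24 + [1] * 18)
X[y == 1, :3] += 1.5

# Logistic regression stands in for ANFIS; the PCA step mirrors the
# data-reduction stage described in the abstract.
clf = make_pipeline(StandardScaler(), PCA(n_components=3),
                    LogisticRegression())
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```

Leave-one-out cross-validation trains on 41 participants and tests on the held-out one, 42 times, which matches the small-sample validation strategy reported.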
Abstract:
Wind power forecasting has played a fundamental role over the last decade in the exploitation of this renewable resource, since it reduces the impact that the fluctuating nature of the wind has on the activity of the various agents involved in its integration, such as the system operator or electricity market agents. The high levels of wind penetration recently reached by some countries have highlighted the need to improve forecasts during events in which the power generated by a wind farm, or a group of them, varies substantially within a relatively short time (on the order of a few hours). These events, known as ramps, have no single cause: they can be driven by meteorological processes occurring on very different spatio-temporal scales, from the passage of large frontal systems at the macroscale to local convective processes such as thunderstorms. In addition, the wind-to-power conversion process itself plays a relevant role in the occurrence of ramps due, among other factors, to the non-linear relationship imposed by the turbine power curve, the misalignment of the machine with respect to the wind, and the aerodynamic interaction between wind turbines. This work addresses the application of statistical models to very short-term ramp forecasting. The relationship between this type of event and macroscale atmospheric processes is also investigated. The models are used to generate point forecasts from the stochastic modelling of a time series of power generated by a wind farm. The forecast horizons considered range from one to six hours. As a first step, a methodology for characterising ramps in time series was developed. The so-called ramp function is based on the wavelet transform and provides an index at each time step.
This index characterises the ramp intensity in terms of the power gradients experienced over a given range of time scales. Three types of predictive model were implemented in order to assess the role that model complexity plays in performance: linear autoregressive (AR) models, varying-coefficient models (VCMs) and models based on artificial neural networks (ANNs). The models were trained by minimising the mean squared error, and the configuration of each was determined through cross-validation. To analyse the contribution of the macroscale state of the atmosphere to ramp forecasting, a methodology was proposed that extracts, from the outputs of meteorological models, information relevant for explaining the occurrence of these events. The methodology is based on principal component analysis (PCA) for the synthesis of atmospheric data and on the use of mutual information (MI) to estimate the non-linear dependence between two signals. It was applied to reanalysis data generated with a general circulation model (GCM) in order to derive exogenous variables that were subsequently fed into the predictive models. The case studies considered correspond to two wind farms located in Spain. The results show that modelling the power series yielded a notable improvement over the reference forecasting model (persistence), and that adding macroscale information produced additional improvements of the same order. These improvements were larger for ramp-down events. The results also indicate different degrees of connection between the macroscale and ramp occurrence at the two wind farms considered. ABSTRACT One of the main drawbacks of wind energy is its intermittent generation, which depends greatly on environmental conditions.
Wind power forecasting has proven to be an effective tool for facilitating wind power integration from both the technical and the economic perspective. Indeed, system operators and energy traders benefit from the use of forecasting techniques, because the reduction of the inherent uncertainty of wind power allows them to adopt optimal decisions. Wind power integration imposes new challenges as higher wind penetration levels are attained. Wind power ramp forecasting is an example of such a recent topic of interest. The term ramp refers to a large and rapid variation (1-4 hours) observed in the wind power output of a wind farm or portfolio. Ramp events can be caused by a broad number of meteorological processes that occur at different time/spatial scales, from the passage of large-scale frontal systems to local processes such as thunderstorms and thermally-driven flows. Ramp events may also be conditioned by features related to the wind-to-power conversion process, such as yaw misalignment, wind turbine shut-down and the aerodynamic interaction between the wind turbines of a wind farm (wake effect). This work is devoted to wind power ramp forecasting, with special focus on the connection between the global scale and ramp events observed at the wind farm level. The framework of this study is the point-forecasting approach. Time-series-based models were implemented for very short-term prediction, characterised by prediction horizons of up to six hours ahead. As a first step, a methodology to characterise ramps within a wind power time series was proposed. The so-called ramp function is based on the wavelet transform and provides a continuous index related to the ramp intensity at each time step. The underlying idea is that ramps are characterised by high power output gradients evaluated over different time scales.
A number of state-of-the-art time-series-based models were considered, namely linear autoregressive (AR) models, varying-coefficient models (VCMs) and artificial neural networks (ANNs). This allowed us to gain insight into how the complexity of the model contributes to the accuracy of the wind power time series modelling. The models were trained on the basis of a mean squared error criterion, and the final set-up of each model was determined through cross-validation techniques. In order to investigate the contribution of the global scale to wind power ramp forecasting, a methodological proposal to identify features in atmospheric raw data that are relevant for explaining wind power ramp events was presented. The proposed methodology is based on two techniques: principal component analysis (PCA) for atmospheric data compression and mutual information (MI) for assessing non-linear dependence between variables. The methodology was applied to reanalysis data generated with a general circulation model (GCM). This allowed for the elaboration of explanatory variables meaningful for ramp forecasting, which were used as exogenous variables by the forecasting models. The study covered two wind farms located in Spain. All the models outperformed the reference model (persistence) during both ramp and non-ramp situations. Adding atmospheric information had a noticeable impact on the forecasting performance, especially during ramp-down events. Results also suggested different levels of connection between ramp occurrence at the wind farm level and the global scale.
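The ramp function idea described above, an index built from power gradients evaluated over a range of time scales, can be sketched with plain finite differences standing in for the wavelet transform used in the thesis; the power series and the set of scales below are invented for illustration.

```python
import numpy as np

def ramp_index(p, scales=(1, 2, 4)):
    """Hypothetical ramp function: mean absolute power gradient
    evaluated over several time scales (the thesis uses a wavelet
    transform; plain finite differences stand in here)."""
    p = np.asarray(p, dtype=float)
    idx = np.zeros_like(p)
    for s in scales:
        g = np.zeros_like(p)
        g[s:] = p[s:] - p[:-s]          # power gradient at scale s
        idx += np.abs(g) / s            # normalise by the scale
    return idx / len(scales)

# A flat normalised-power series with one sharp ramp-up at t = 50.
power = np.concatenate([np.full(50, 0.2), np.full(50, 0.9)])
r = ramp_index(power)
print(int(np.argmax(r)))  # the index peaks at the ramp, t = 50
```

A continuous index like this lets ramp intensity be thresholded or used directly as a regression target, rather than forcing a binary ramp/no-ramp label.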
Abstract:
In many engineering fields, the integrity and reliability of structures are extremely important. They are controlled through adequate knowledge of existing damage. Typically, reaching the level of knowledge needed to characterise structural integrity involves the use of non-destructive testing techniques, which are often expensive and time-consuming. Nowadays, many industries seek to increase the reliability of the structures they use. Using state-of-the-art techniques it is possible to monitor structures and, in some cases, to detect incipient damage that could trigger catastrophic failures. Unfortunately, as the complexity of structures, components and systems increases, the risk of damage and failure also increases, and at the same time the detection of such failures and defects becomes more difficult. In recent years, the aerospace industry has made great efforts to integrate sensors into structures and to develop algorithms that determine structural integrity in real time. This philosophy has been called "Structural Health Monitoring", and such structures have been named "Smart Structures". These new types of structures integrate materials, sensors, actuators and algorithms to detect, quantify and locate damage within themselves. A novel methodology for damage detection in structures is proposed in this work. The methodology is based on strain measurements and consists in developing pattern recognition techniques for the strain field, based on PCA (Principal Component Analysis) and other dimensionality reduction techniques.
The use of fibre Bragg gratings and distributed measurements as strain sensors is proposed. The methodology was validated through laboratory-scale tests and full-scale tests on complex structures. The effects of variable load conditions were studied, and several experiments were performed under static and dynamic load conditions, demonstrating that the methodology is robust under unknown load conditions. ABSTRACT In many engineering fields, the integrity and reliability of structures are extremely important. They are controlled by adequate knowledge of existing damage. Typically, achieving the level of knowledge necessary to characterize structural integrity involves the use of non-destructive testing techniques. These are often expensive and time-consuming. Nowadays, many industries look to increase the reliability of the structures they use. By using leading-edge techniques it is possible to monitor these structures and, in some cases, to detect incipient damage that could trigger catastrophic failures. Unfortunately, as the complexity of structures, components and systems increases, the risk of damage and failure also increases. At the same time, the detection of such failures and defects becomes more difficult. In recent years, the aerospace industry has made great efforts to integrate sensors within structures and to develop algorithms for determining structural integrity in real time. This philosophy has been called "Structural Health Monitoring" and these structures have been called "smart structures". These new types of structures integrate materials, sensors, actuators and algorithms to detect, quantify and locate damage within themselves. A novel methodology for damage detection in structures is proposed. The methodology is based on strain measurements and consists in the development of strain field pattern recognition techniques.
These techniques are based on PCA (Principal Component Analysis) and other dimensionality reduction techniques. The use of fiber Bragg gratings and distributed sensing as strain sensors is proposed. The methodology has been validated by means of laboratory-scale tests and full-scale tests with complex structures. The effects of variable load conditions were studied, and several experiments were performed for static and dynamic load conditions, demonstrating that the methodology is robust under unknown load conditions.
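A common way to turn PCA of healthy-state strain fields into a damage detector is to score new measurements by their reconstruction residual (the Q statistic): patterns the healthy-state model cannot reproduce score high. The sketch below follows that generic recipe on synthetic strain data; sensor count, latent load patterns and noise level are invented, and this is an illustration of the principle rather than the thesis methodology.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Synthetic strain field: 200 healthy snapshots over 20 strain sensors,
# driven by 2 latent load patterns plus small measurement noise.
loads = rng.normal(size=(200, 2))
modes = rng.normal(size=(2, 20))
healthy = loads @ modes + 0.01 * rng.normal(size=(200, 20))

pca = PCA(n_components=2).fit(healthy)

def q_statistic(x):
    """Squared reconstruction residual: strain patterns that the
    healthy-state PCA model cannot reproduce score high."""
    x = np.atleast_2d(x)
    r = x - pca.inverse_transform(pca.transform(x))
    return (r ** 2).sum(axis=1)

baseline = q_statistic(healthy).max()
# A 'damaged' snapshot: a local stiffness change perturbs one sensor.
damaged = rng.normal(size=(1, 2)) @ modes
damaged[0, 7] += 1.0
print(q_statistic(damaged)[0] > baseline)
```

Because the detector only models the healthy state, it requires no examples of damage for training, which suits the unknown-load robustness claim above.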
Abstract:
The application of the Electro-Mechanical Impedance (EMI) method for damage detection in Structural Health Monitoring has noticeably increased in recent years. The EMI method utilizes piezoelectric transducers to directly measure the mechanical properties of the host structure, obtaining the so-called impedance measurement, which is highly influenced by variations in the dynamic parameters of the structure. These measurements usually contain a large number of frequency points as well as a high number of dimensions, since each frequency range swept can be considered an independent variable. This makes such data hard to handle, increasing computational cost and making the analysis substantially time-consuming. For this reason, Principal Component Analysis (PCA)-based data compression has been employed in this work in order to enhance the analysis capability of the raw data. Furthermore, a Support Vector Machine (SVM), widely used in the machine learning and pattern recognition fields, has been applied in order to model any pattern present in the PCA-compressed data, using only the first two principal components. Known non-damaged and damaged measurements of an experimentally tested beam were used as training input data for the SVM algorithm, with the same number of cases measured on beams with unknown structural health conditions used as test input data. Thus, the purpose of this work is to demonstrate how, with a few impedance measurements of a beam as raw data, its health status can be determined based on pattern recognition procedures.
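The processing chain described, impedance spectra compressed to their first two principal components and classified with an SVM, can be sketched on toy spectra as follows; the resonance model, frequency range and class separation are invented for illustration and are not the experimental beam data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
freqs = np.linspace(10e3, 100e3, 300)   # swept frequency points, Hz

def spectrum(shift):
    # Toy impedance signature: one resonance peak whose position
    # drifts with structural change, plus sensor noise (illustrative).
    peak = 30e3 + shift
    return (np.exp(-((freqs - peak) / 3e3) ** 2)
            + 0.02 * rng.normal(size=freqs.size))

healthy = np.stack([spectrum(0.0) for _ in range(20)])
damaged = np.stack([spectrum(4e3) for _ in range(20)])
X = np.vstack([healthy, damaged])
y = np.array([0] * 20 + [1] * 20)

# Only the first two principal components feed the SVM, as in the study.
model = make_pipeline(PCA(n_components=2), SVC(kernel="rbf")).fit(X, y)
print(model.score(X, y))
```

Compressing 300 frequency points to 2 scores is what makes the per-measurement SVM evaluation cheap, which is the motivation the abstract gives for the PCA stage.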
Abstract:
This research focuses on determining the strategic groups (SGs) of the Venezuelan banking industry and their influence on performance in the sector, as well as their relationship with geographic coverage and exclusion, during the period 2008-2010. Box's M test showed that there was financial instability during this period, so the behaviour of the SGs was evaluated in each year of the study. The sample consisted of 58 financial institutions in 2008, 52 banks in 2009 and only 39 institutions in 2010. Before applying cluster analysis to the variables of strategic scope and committed resources, a principal component analysis was performed to determine the relationship between these variables and to detect outliers; to distinguish the strategies that characterised the groups, the commonly used procedure proposed by Amel and Rhoades (1988) was followed, reinforced with ANOVA, Scheffé, Kruskal-Wallis and Mann-Whitney U tests of means or medians. The statistical package SPSS (version 15.0) and the ArcGIS geographic information system software (version 9.2) were used to achieve the proposed objective. The results indicate that: 1) by applying a statistical procedure it is possible to detect gradations in the implementation or avoidance of strategies, or in the commitment of resources, by the SGs; 2) in times of financial instability banks change strategy, and therefore SG, in order to achieve good performance or at least survive; 3) there was only partial evidence of the predictive validity of strategic groups; 4) at least in Venezuela, banking SGs tend to adopt a geographic coverage strategy consistent with their financial strategy, and the SGs differ in their level of Corporate Social Responsibility in the fight against geographic financial exclusion.
ABSTRACT This research focuses on identifying strategic groups (SGs) of the Venezuelan banking industry and their influence on performance in the sector, as well as their relationship with geographical coverage and exclusion, during the period 2008-2010. Box's M test showed that there was financial instability during this period, so the behavior of the SGs in each year of the study was evaluated. The sample comprised 58 financial institutions in 2008, 52 banks in 2009 and only 39 institutions in 2010. Before applying cluster analysis to the variables of strategic scope and committed resources, principal component analysis was performed to determine the relationship between these variables and to detect outliers; to distinguish the strategies that characterized the groups, the commonly used procedure proposed by Amel and Rhoades (1988) was followed, reinforced with ANOVA, Scheffé, Kruskal-Wallis and Mann-Whitney U tests of means or medians. SPSS (version 15.0) and the ArcGIS geographic information system (version 9.2) were used to achieve the objective. The results indicate that: 1) by applying a statistical procedure it is possible to detect gradations in the implementation or avoidance of strategies, or in resource commitment, by SGs; 2) in times of financial instability banks change their strategy, and therefore their SG, in order to achieve good performance or at least survive; 3) there was only partial evidence for the predictive validity of strategic groups; 4) at least in Venezuela, banking SGs tend to adopt a geographical coverage strategy consistent with their financial strategy, and the SGs differ in their level of corporate social responsibility in the fight against geographic financial exclusion.
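The statistical sequence described, PCA of the strategy variables, cluster analysis to form groups, then a non-parametric test of performance differences across groups, can be sketched on toy bank-level data; the variables, group structure and performance measure (ROA) below are invented for illustration.

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Toy bank-level data: two latent strategic groups differing in four
# scope/resource variables and in a performance measure (ROA).
strategy = np.vstack([rng.normal(0, 1, size=(25, 4)),
                      rng.normal(2, 1, size=(25, 4))])
roa = np.concatenate([rng.normal(1.0, 0.3, 25),
                      rng.normal(2.0, 0.3, 25)])

Z = StandardScaler().fit_transform(strategy)
scores = PCA(n_components=2).fit_transform(Z)   # variable screening step
groups = KMeans(n_clusters=2, n_init=10,
                random_state=0).fit_predict(scores)

# Kruskal-Wallis: does performance differ across the derived groups?
stat, p = kruskal(roa[groups == 0], roa[groups == 1])
print(p < 0.05)
```

A significant Kruskal-Wallis result is what "predictive validity of strategic groups" would look like in this framework: group membership carries information about performance.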
Abstract:
Recent experiments using electrical and N-methyl-d-aspartate microstimulation of the spinal cord gray matter and cutaneous stimulation of the hindlimb of spinalized frogs have provided evidence for a modular organization of the frog’s spinal cord circuitry. A “module” is a functional unit in the spinal cord circuitry that generates a specific motor output by imposing a specific pattern of muscle activation. The output of a module can be characterized as a force field: the collection of the isometric forces generated at the ankle over different locations in the leg’s workspace. Different modules can be combined independently so that their force fields linearly sum. The goal of this study was to ascertain whether the force fields generated by the activation of supraspinal structures could result from combinations of a small number of modules. We recorded a set of force fields generated by the electrical stimulation of the vestibular nerve in seven frogs, and we performed a principal component analysis to study the dimensionality of this set. We found that 94% of the total variation of the data is explained by the first five principal components, a result that indicates that the dimensionality of the set of fields evoked by vestibular stimulation is low. This result is compatible with the hypothesis that vestibular fields are generated by combinations of a small number of spinal modules.
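The dimensionality check reported above, the share of variance explained by the first few principal components, can be sketched on synthetic force-field data built from five latent "modules"; grid size, sample count and noise level are invented, not the frog data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Synthetic force-field data set: 100 fields, each sampled as 2-D force
# vectors on a 4x4 workspace grid (32 numbers per field), generated as
# linear combinations of 5 underlying 'modules' plus small noise.
modules = rng.normal(size=(5, 32))
weights = rng.normal(size=(100, 5))
fields = weights @ modules + 0.05 * rng.normal(size=(100, 32))

pca = PCA().fit(fields)
cum = np.cumsum(pca.explained_variance_ratio_)
print(f"first 5 PCs explain {cum[4]:.0%} of the variance")
```

When the data really are combinations of a few modules, the cumulative explained variance saturates after that many components, which is the signature the study looked for (94% at five components).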
Abstract:
Phyllosphere microbial communities were evaluated on leaves of field-grown plant species by culture-dependent and -independent methods. Denaturing gradient gel electrophoresis (DGGE) with 16S rDNA primers generally indicated that microbial community structures were similar on different individuals of the same plant species, but unique to different plant species. Phyllosphere bacteria were identified from Citrus sinensis (cv. Valencia) by using DGGE analysis followed by cloning and sequencing of the dominant rDNA bands. Of the 17 unique sequences obtained, database queries showed only four strains that had been described previously as phyllosphere bacteria. Five of the 17 sequences had 16S similarities lower than 90% to database entries, suggesting that they represent previously undescribed species. In addition, three fungal species were also identified. Very different 16S rDNA DGGE banding profiles were obtained when replicate cv. Valencia leaf samples were cultured in BIOLOG EcoPlates for 4.5 days. All of these rDNA sequences had 97–100% similarity to those of known phyllosphere bacteria, but only two of them matched those identified by the culture-independent DGGE analysis. Like other studied ecosystems, microbial phyllosphere communities are therefore more complex than previously thought on the basis of conventional culture-based methods.
Abstract:
We study quantum phase transitions in ultracold bosonic gases trapped in optical lattices. The physics of these systems is captured by a Bose-Hubbard-type model which, for a disorder-free system in which the atoms interact at short range and tunnelling occurs only between nearest-neighbour sites, predicts the superfluid-Mott insulator (SF-MI) quantum phase transition as the depth of the optical lattice potential is varied. In a first study, we examine how the phase diagram of this transition changes when going from a square to a hexagonal lattice. In a second, we investigate how disorder modifies the transition. For the hexagonal lattice, we present the phase diagram of the SF-MI transition and an estimate of the critical point of the first Mott lobe. These results were obtained using the worm quantum Monte Carlo algorithm. We compare our results with those obtained from a mean-field approximation and with those for a square optical lattice. When disorder is introduced into the system, a new phase emerges in the ground-state phase diagram between the superfluid and Mott insulator phases. This new phase is known as the Bose glass (BG), and the SF-BG quantum phase transition occurring in this system has generated much controversy since the first studies in the late 1980s. Despite progress towards a full understanding of this transition, the basic characterisation of its critical properties is still debated. Our study was motivated by the publication of experimental and numerical results for three-dimensional systems [Yu et al., Nature 489, 379 (2012); Yu et al., PRB 86, 134421 (2012)] that violate the scaling law $\phi = \nu z$, where $\phi$ is the critical-temperature exponent, $z$ is the dynamical critical exponent and $\nu$ is the correlation-length exponent.
We address this controversy numerically through a finite-size scaling analysis using both the quantum and classical versions of the worm algorithm. Our results demonstrate that previous work on the dependence of the superfluid-normal liquid transition temperature on the chemical potential (or magnetic field, in spin systems), $T_c \propto (\mu-\mu_c)^\phi$, misinterpreted a transient behaviour on the approach to the genuine critical region. When the model parameters are modified so as to enlarge the quantum critical region, simulations of both the classical and quantum models reveal that the scaling law $\phi = \nu z$ [with $\phi = 2.7(2)$, $z = 3$ and $\nu = 0.88(5)$] holds. We also estimate the critical exponent of the order parameter, finding $\beta = 1.5(2)$.
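Estimating the critical-temperature exponent in $T_c \propto (\mu-\mu_c)^\phi$ from simulation data reduces, close to the critical point, to a power-law fit. The sketch below fits synthetic data generated with the reported exponent $\phi = 2.7$; the data are invented, not simulation output, and the known $\mu_c$ is held fixed for simplicity.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
# Synthetic critical scaling: T_c ~ (mu - mu_c)^phi with phi = 2.7
# (the exponent reported above; the data points are illustrative).
mu_c, phi_true = 1.0, 2.7
mu = np.linspace(1.05, 1.5, 30)
tc = (mu - mu_c) ** phi_true * (1 + 0.01 * rng.normal(size=mu.size))

def power_law(m, a, phi):
    # mu_c is assumed known here; in practice it is a fit parameter too.
    return a * (m - mu_c) ** phi

(a, phi), _ = curve_fit(power_law, mu, tc, p0=(1.0, 2.0))
print(f"fitted phi = {phi:.2f}")
```

The controversy described above is precisely about where such a fit is performed: fitting in a transient region away from the true critical window yields an effective exponent that violates $\phi = \nu z$.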
Abstract:
Deformable template models are first applied to track the inner wall of coronary arteries in intravascular ultrasound sequences, mainly to assist angioplasty surgery. A circular template is used to initialize an elliptical deformable model that tracks wall deformation when a balloon placed at the tip of the catheter is inflated. We define a new energy function to drive the behavior of the template and test its robustness on both real and synthetic images. Finally, we introduce a framework for learning and recognizing spatio-temporal geometric constraints based on Principal Component Analysis (eigenconstraints).
Abstract:
We develop a theory to calculate exciton binding energies of both two- and three-dimensional spin-polarized exciton gases within a mean-field approach. Our method allows the analysis of recent experiments showing the importance of the polarization and intensity of the excitation light on the exciton luminescence of GaAs quantum wells. We study the breaking of the spin degeneracy observed at high exciton density (5×10^10 cm^-2). Energy level splitting between spin +1 and spin -1 is shown to be due to many-body interexcitonic exchange, while the spin relaxation time is controlled by intraexciton exchange. © 1996 The American Physical Society.
Abstract:
Ice core data from Antarctica provide detailed insights into the characteristics of past climate and atmospheric circulation, as well as changes in the aerosol load of the atmosphere. We present high-resolution records of soluble calcium (Ca²⁺), non-sea-salt soluble calcium (nssCa²⁺), and particulate mineral dust aerosol from the East Antarctic Plateau at a depth resolution of 1 cm, spanning the past 800,000 years. Despite the fact that all three parameters are largely dust-derived, the ratio of nssCa²⁺ to particulate dust depends on the particulate dust concentration itself. We used principal component analysis to extract the joint climatic signal and produce a common high-resolution record of dust flux. This new record is used to identify Antarctic warming events during the past eight glacial periods. The phasing of dust flux and CO₂ changes during glacial-interglacial transitions reveals that iron fertilization of the Southern Ocean during the past nine glacial terminations was not the dominant factor in the deglacial rise of CO₂ concentrations. Rapid changes in dust flux during glacial terminations and Antarctic warming events point to a rapid response of the southern westerly wind belt in the region of southern South American dust sources to changing climate conditions. The clear lead of these dust changes over the temperature rise suggests that an atmospheric reorganization occurred in the Southern Hemisphere before the Southern Ocean warmed significantly.
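Extracting a joint climatic signal from several dust-related proxies with PCA amounts to taking the leading principal component of the stacked records. A minimal sketch on synthetic proxy series follows; the three proxies, their shared signal and the noise levels are invented, not ice-core data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
# Toy depth series: three dust-related proxies (standing in for Ca2+,
# nssCa2+ and particulate dust) sharing one smooth 'climatic' signal,
# each contaminated by its own measurement noise.
signal = np.cumsum(rng.normal(size=500))
proxies = np.stack(
    [signal + 0.3 * rng.normal(size=500) for _ in range(3)], axis=1)

# The leading principal component is the common variance of the
# proxies, i.e. the joint signal.
pc1 = PCA(n_components=1).fit_transform(proxies).ravel()
corr = abs(np.corrcoef(pc1, signal)[0, 1])
print(f"|corr(PC1, signal)| = {corr:.3f}")
```

Because the proxy-specific noise terms are uncorrelated, they are pushed into the trailing components, and PC1 recovers the shared record almost exactly.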
Abstract:
This study examined the utility of the Attachment Style Questionnaire (ASQ) in an Italian sample of 487 consecutively admitted psychiatric participants and an independent sample of 605 nonclinical participants. Minimum average partial analysis of data from the psychiatric sample supported the hypothesized five-factor structure of the items; furthermore, multiple-group component analysis showed that this five-factor structure was not an artifact of differences in item distributions. The five-factor structure of the ASQ was largely replicated in the nonclinical sample. Furthermore, in both psychiatric and nonclinical samples, a two-factor higher order structure of the ASQ scales was observed. The higher order factors of Avoidance and Anxious Attachment showed meaningful relations with scales assessing parental bonding, but were not redundant with these scales. Multivariate normal mixture analysis supported the hypothesis that adult attachment patterns, as measured by the ASQ, are best considered as dimensional constructs.
Abstract:
Slag composition determines the physical and chemical properties as well as the application performance of molten oxide mixtures. It is therefore necessary to establish a routine instrumental technique that produces accurate and precise analytical results for better process and production control. In the present paper, a multi-component analysis technique for powdered metallurgical slag samples by X-ray Fluorescence Spectrometer (XRFS) is demonstrated. This technique provides rapid and accurate results with minimum sample preparation. It eliminates the requirement for a fused disc, using briquetted samples protected by a layer of Borax®. While the use of theoretical alpha coefficients has allowed accurate calibrations to be made using fewer standard samples, the application of the pseudo-Voigt function to curve fitting makes it possible to resolve overlapped peaks in X-ray spectra that cannot be physically separated. The analytical results for both certified reference materials and industrial slag samples measured using the present technique are comparable to those obtained for the same samples by conventional fused disc measurements.
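A pseudo-Voigt profile is a linear mix of a Gaussian and a Lorentzian of equal width, and fitting it to a spectrum is a small non-linear least-squares problem. The sketch below fits a single synthetic peak; the peak position, width, mixing parameter and noise level are illustrative, not slag measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, amp, cen, w, eta):
    """Pseudo-Voigt profile: Gaussian and Lorentzian of equal FWHM w,
    mixed linearly with weight eta on the Lorentzian part."""
    gauss = np.exp(-4 * np.log(2) * ((x - cen) / w) ** 2)
    lorentz = 1 / (1 + 4 * ((x - cen) / w) ** 2)
    return amp * (eta * lorentz + (1 - eta) * gauss)

rng = np.random.default_rng(7)
x = np.linspace(20, 30, 400)                    # spectral axis
y = pseudo_voigt(x, 100, 25.0, 0.8, 0.4) + rng.normal(0, 0.5, x.size)

popt, _ = curve_fit(pseudo_voigt, x, y, p0=(80, 24.8, 1.0, 0.5))
print(f"fitted centre = {popt[1]:.2f}")
```

Resolving overlapped peaks, as in the paper, is the same fit with a sum of several pseudo-Voigt terms; the fitter separates components that the spectrometer cannot resolve physically.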
Abstract:
Edaphic factors affect the quality of onions (Allium cepa). Two experiments were carried out in the field and glasshouse to investigate the effects of N (field: 0, 120 kg ha⁻¹; glasshouse: 0, 108 kg ha⁻¹), S (field: 0, 20 kg ha⁻¹; glasshouse: 0, 4.35 kg ha⁻¹) and soil type (clay, sandy loam) on onion quality. A conducting polymer sensor electronic nose (E-nose) was used to classify onion headspace volatiles. Relative changes in the E-nose sensor resistance ratio (%dR/R) were reduced following N and S fertilisation. A 2D Principal Component Analysis (PCA) of the E-nose data sets accounted for c. 100% of the variation in onion headspace volatiles in both experiments. For the field experiment, E-nose data set clusters for headspace volatiles of no-N-added onions overlapped (D² = 1.0) irrespective of S treatment. Headspace volatiles of N-fertilised onions for the glasshouse sandy loam also overlapped (D² = 1.1) irrespective of S treatment, as compared with distinct separations among clusters for the clay soil. N fertilisation significantly (P < 0.01) reduced onion bulb pyruvic acid concentration (flavour) in both experiments. S fertilisation increased pyruvic acid concentration significantly (P < 0.01) in the glasshouse experiment, especially for the clay soil, but had no effect on pyruvic acid concentration in the field. N and S fertilisation significantly (P < 0.01) increased lachrymatory potency (pungency), but reduced total soluble solids (TSS) content in the field experiment. In the glasshouse experiment, N and S had no effect on TSS. TSS content was increased on the clay by 1.2-fold as compared with the sandy loam. Onion tissue N:water-soluble SO₄²⁻ ratios of between five and eight were associated with greater %dR/R and pyruvic acid concentration values. N did not affect inner bulb tissue microbial load.
In contrast, S fertilisation reduced inner bulb tissue microbial load by 80% in the field experiment and between 27% (sandy loam) and 92% (clay) in the glasshouse experiment. Overall, onion bulb quality discriminated by the E-nose responded to N, S and soil type treatments, and reflected their interactions. However, the conventional analytical and sensory measures of onion quality did not correlate with %dR/R.
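The cluster-separation statistic quoted above (D²) is the squared Mahalanobis distance between group centroids in the 2D PCA score space: values near 1 indicate overlap, large values indicate distinct clusters. A minimal sketch on synthetic sensor-array data follows; the sensor count, treatment shift and group sizes are invented, not E-nose measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
# Toy E-nose responses: 12 conducting-polymer sensors for two
# treatments; the treatment shifts a subset of sensors (illustrative).
g1 = rng.normal(size=(30, 12))
g2 = rng.normal(size=(30, 12))
g2[:, :4] += 2.0

X = np.vstack([g1, g2])
scores = PCA(n_components=2).fit_transform(X)
s1, s2 = scores[:30], scores[30:]

# Squared Mahalanobis distance D^2 between the cluster centroids,
# using the pooled within-group covariance of the PCA scores.
pooled = (np.cov(s1.T) + np.cov(s2.T)) / 2
d = s1.mean(axis=0) - s2.mean(axis=0)
d2 = float(d @ np.linalg.solve(pooled, d))
print(f"D^2 = {d2:.1f}")
```

Shrinking the treatment shift towards zero drives D² towards the overlap values (~1.0) reported for the no-N field onions.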
Abstract:
We present a theoretical analysis of three-dimensional (3D) matter-wave solitons and their stability properties in coupled atomic and molecular Bose-Einstein condensates (BECs). The soliton solutions to the mean-field equations are obtained in an approximate analytical form by means of a variational approach. We investigate soliton stability within the parameter space described by the atom-molecule conversion coupling, the atom-atom s-wave scattering, and the bare formation energy of the molecular species. In terms of ordinary optics, this is analogous to the process of sub- or second-harmonic generation in a quadratic nonlinear medium modified by a cubic nonlinearity, together with a phase mismatch term between the fields. While the possibility of formation of multidimensional spatiotemporal solitons in pure quadratic media has been theoretically demonstrated previously, here we extend this prediction to matter-wave interactions in BEC systems where higher-order nonlinear processes due to interparticle collisions are unavoidable and may not be neglected. The stability of the solitons predicted for repulsive atom-atom interactions is investigated by direct numerical simulations of the equations of motion in a full 3D lattice. Our analysis also leads to a possible technique for demonstrating the ground state of the Schrödinger-Newton and related equations that describe Bose-Einstein condensates with nonlocal interparticle forces.