995 results for Response complexity
Abstract:
AIM: The purpose of this study was to examine the effect of intensive table-tennis practice on perceptual, decision-making and motor systems. Groups of elite (HL, n=11), intermediate (LL, n=6) and control (CC, n=11) subjects performed tasks of different levels. METHODS: All subjects underwent a reaction-time test and a response-time test consisting of a pointing task to targets placed at distinct distances (15 and 25 cm) on the right and left sides. A ball-speed test in forehand and backhand conditions was performed only by the HL and LL groups. RESULTS: Reaction time was higher in the CC group than in the HL group (P < 0.05). In the response-time test, there were significant main effects of distance (P < 0.0001) and table-tennis expertise (P = 0.011). In the ball-speed test, the HL group was consistently faster than the LL group in both the forehand stroke (P < 0.0001) and the backhand stroke (P < 0.0001). Overall, the forehand stroke was significantly faster than the backhand stroke. CONCLUSION: We conclude that table-tennis players have shorter response times than non-athletes, and that reaction-time and response-time tasks cannot distinguish the performance of well-trained table-tennis players from that of intermediate players, whereas the ball-speed test seems able to do so.
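As a hedged note on terminology (the decomposition below is standard in motor behavior research, not stated explicitly in this abstract), the distinction between the two tests is easier to read with the usual split of response time into a reaction component and a movement component:

```latex
% Standard decomposition assumed here, not taken from the abstract:
% response time = reaction time + movement time
\[
t_{\text{response}} = t_{\text{reaction}} + t_{\text{movement}}
\]
```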
Abstract:
Considerable effort has been made in recent years to optimize materials properties for magnetic hyperthermia applications. However, due to the complexity of the problem, several aspects pertaining to the combined influence of the different parameters involved still remain unclear. In this paper, we discuss in detail the role of the magnetic anisotropy on the specific absorption rate of cobalt-ferrite nanoparticles with diameters ranging from 3 to 14 nm. The structural characterization was carried out using x-ray diffraction and Rietveld analysis, and all relevant magnetic parameters were extracted from vibrating sample magnetometry. Hyperthermia investigations were performed at 500 kHz with a sinusoidal magnetic field amplitude of up to 68 Oe. The specific absorption rate was investigated as a function of the coercive field, saturation magnetization, particle size, and magnetic anisotropy. The experimental results were also compared with theoretical predictions from the linear response theory and dynamic hysteresis simulations, and exceptional agreement was found in both cases. Our results show that the specific absorption rate has a narrow and pronounced maximum for intermediate anisotropy values. This not only highlights the importance of this parameter but also shows that in order to obtain optimum efficiency in hyperthermia applications, it is necessary to carefully tailor the materials properties during the synthesis process. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4729271]
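For orientation, a hedged sketch of the linear response theory expression the authors compare against (the standard Rosensweig form; the symbols are generic and not values from this paper): the power dissipated per unit volume by a sinusoidal field of amplitude H_0 and frequency f is

```latex
% Linear response theory: chi'' is the out-of-phase susceptibility,
% tau the effective relaxation time, rho the nanoparticle mass density.
\[
P = \mu_0 \pi \chi'' f H_0^2,
\qquad
\chi'' = \chi_0 \,\frac{2\pi f \tau}{1 + (2\pi f \tau)^2},
\qquad
\mathrm{SAR} = P/\rho.
\]
```

Within this picture the anisotropy enters through the relaxation time τ (for Néel relaxation, τ grows roughly as exp(KV/k_BT)), which is consistent with the narrow SAR maximum at intermediate anisotropy reported above.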
Abstract:
The aim of the thesis is to propose Bayesian estimation, through Markov chain Monte Carlo, of multidimensional item response theory models for graded responses with complex structures and correlated traits. In particular, this work focuses on the multiunidimensional and the additive underlying latent structures: the former is widely used and represents a classical approach in multidimensional item response analysis, while the latter is able to reflect the complexity of real interactions between items and respondents. A simulation study is conducted to evaluate parameter recovery for the proposed models under different conditions (sample size, test and subtest length, number of response categories, and correlation structure). The results show that parameter recovery is particularly sensitive to the sample size, owing to the model complexity and the high number of parameters to be estimated. For a sufficiently large sample size, the parameters of the multiunidimensional and additive graded response models are well recovered. The results are also affected by the trade-off between the number of items constituting the test and the number of item categories. An application of the proposed models to response data collected to investigate Romagna and San Marino residents' perceptions and attitudes towards the tourism industry is also presented.
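As a sketch of the measurement model involved (not the thesis code; the parameter values below are hypothetical), the graded response model assigns each ordered category a probability given by differences of adjacent cumulative logistic curves, and an additive structure lets one item load on several correlated traits:

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities for one item under a multidimensional
    graded response model (logistic link).

    theta : (D,) latent trait vector for one respondent
    a     : (D,) discrimination parameters (additive structure: the item
            loads on several traits; a multiunidimensional item would
            have a single non-zero entry)
    b     : (K-1,) increasing thresholds for K ordered categories
    """
    # Cumulative probability of responding in category k or above
    eta = a @ theta - b                        # (K-1,)
    p_star = 1.0 / (1.0 + np.exp(-eta))        # P(Y >= k), k = 1..K-1
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    # Category probabilities are differences of adjacent cumulatives
    return p_star[:-1] - p_star[1:]

# Hypothetical values, for illustration only
probs = grm_category_probs(theta=np.array([0.5, -0.2]),
                           a=np.array([1.2, 0.8]),
                           b=np.array([-1.0, 0.0, 1.5]))
print(probs, probs.sum())  # probabilities over 4 categories, sum to 1
```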
Abstract:
1. Proliferative kidney disease (PKD) is a disease of salmonid fish caused by the endoparasitic myxozoan, Tetracapsuloides bryosalmonae, which uses freshwater bryozoans as primary hosts. Clinical PKD is characterised by a temperature-dependent proliferative and inflammatory response to parasite stages in the kidney.
2. Evidence that PKD is an emerging disease includes outbreaks in new regions, declines in Swiss brown trout populations and the adoption of expensive practices by fish farms to reduce heavy losses. Disease-related mortality in wild fish populations is almost certainly underestimated because of, e.g., oversight, scavenging by wild animals, misdiagnosis and fish stocking.
3. PKD prevalences are spatially and temporally variable, range from 0 to 90-100% and are typically highest in juvenile fish.
4. Laboratory and field studies demonstrate that (i) increasing temperatures enhance disease prevalence, severity and distribution and PKD-related mortality; and (ii) eutrophication may promote outbreaks. Both bryozoans and T. bryosalmonae stages in bryozoans undergo temperature- and nutrient-driven proliferation.
5. Tetracapsuloides bryosalmonae is likely to achieve persistent infection of highly clonal bryozoan hosts through vertical transmission, low virulence and host condition-dependent cycling between covert and overt infections. Exploitation of fish hosts entails massive proliferation and spore production by stages that escape the immune response. Many aspects of the parasite's life cycle remain obscure. If infectious stages are produced in all hosts, then the complex life cycle includes multiple transmission routes.
6. Patterns of disease outbreaks suggest that background, subclinical infections exist under normal environmental conditions. When conditions change, outbreaks may then occur in regions where infection was hitherto unsuspected.
7. Environmental change is likely to cause PKD outbreaks in more northerly regions as warmer temperatures promote disease development, enhance bryozoan biomass and increase spore production, but may also reduce the geographical range of this unique multihost-parasite system. Coevolutionary dynamics resulting from host-parasite interactions that maximise fitness in previous environments may pose problems for sustainability, particularly in view of extensive declines in salmonid populations and degradation of many freshwater habitats.
Abstract:
Our approaches to the use of EEG studies for understanding the pathogenesis of schizophrenic symptoms are presented. The basic assumptions of a heuristic, multifactorial model of the psychobiological brain mechanisms underlying the organization of normal behavior are described and used to formulate and test hypotheses about the pathogenesis of schizophrenic behavior using EEG measures. Results from our studies on EEG activity and EEG reactivity (= EEG components of a memory-driven, adaptive, non-unitary orienting response), analyzed with spectral parameters and "chaotic" dimensionality (correlation dimension), are summarized. Both analysis procedures showed a deviant brain functional organization in never-treated first-episode schizophrenia which, within the framework of the model, suggests as a common denominator for the pathogenesis of the symptoms a deviation of working memory, the nature of which is functional rather than structural.
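For reference, a hedged note on the "chaotic" dimensionality measure named here: the correlation dimension is conventionally estimated via the Grassberger-Procaccia correlation integral (generic form, not necessarily the authors' exact estimator):

```latex
% Correlation integral over N embedded EEG state vectors x_i;
% Theta is the Heaviside step function, r a distance threshold.
\[
C(r) = \frac{2}{N(N-1)} \sum_{i<j} \Theta\!\left(r - \lVert x_i - x_j \rVert\right),
\qquad
D_2 = \lim_{r \to 0} \frac{\log C(r)}{\log r}.
\]
```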
Abstract:
Reducing the uncertainties related to blade dynamics by improving the quality of numerical simulations of the fluid-structure interaction process is key to a breakthrough in wind-turbine technology. A fundamental step in that direction is the implementation of aeroelastic models capable of capturing the complex features of innovative prototype blades, so they can be tested at realistic full-scale conditions with a reasonable computational cost. We make use of a code based on a combination of two advanced numerical models implemented on a parallel HPC supercomputer platform. First, a model of the structural response of heterogeneous composite blades, based on a variation of the dimensional reduction technique proposed by Hodges and Yu. This technique reduces the geometrical complexity of the blade section into a stiffness matrix for an equivalent beam. The reduced 1-D strain energy is equivalent to the actual 3-D strain energy in an asymptotic sense, allowing accurate modeling of the blade structure as a 1-D finite-element problem and substantially reducing the computational effort required to model the structural dynamics at each time step. Second, a novel aerodynamic model based on an advanced implementation of BEM (Blade Element Momentum) theory, in which all velocities and forces are re-projected through orthogonal matrices into the instantaneous deformed configuration to fully include the effects of large displacements and rotation of the airfoil sections in the computation of aerodynamic forces. This allows the aerodynamic model to take into account the effects of the complex flexo-torsional deformation captured by the more sophisticated structural model mentioned above. In this thesis we have successfully developed a powerful computational tool for the aeroelastic analysis of wind-turbine blades. Owing to its full representation of the combined modes of deformation of the blade as a complex structural part and of their effects on the aerodynamic loads, it constitutes a substantial advance over the state-of-the-art aeroelastic models currently available, such as the FAST-Aerodyn suite. We also include the results of several experiments on the NREL-5MW blade, widely accepted today as a benchmark blade, together with some modifications intended to explore the capacity of the new code to capture features of blade-dynamic behavior that are normally overlooked by existing aeroelastic models.
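A minimal sketch of the re-projection idea described above (illustrative only; the function and variable names are hypothetical, the real code handles full 3-D kinematics): the inflow velocity is rotated into the instantaneous deformed frame of an airfoil section before the BEM forces are evaluated.

```python
import numpy as np

def rotation_matrix(twist):
    """2-D orthogonal matrix for a section rotated by `twist` radians
    (flexo-torsional deformation reduces to a rotation of the local
    chordwise/normal frame in this simplified planar sketch)."""
    c, s = np.cos(twist), np.sin(twist)
    return np.array([[c, -s], [s, c]])

def section_angle_of_attack(v_inflow, v_structural, twist):
    """Angle of attack seen by the deformed airfoil section.

    v_inflow     : (2,) inflow velocity in the undeformed frame
    v_structural : (2,) velocity of the section due to blade motion
    twist        : instantaneous torsional rotation of the section
    """
    # Relative wind in the undeformed frame
    v_rel = v_inflow - v_structural
    # Re-project into the instantaneous deformed section frame
    v_local = rotation_matrix(twist).T @ v_rel
    # Angle between the relative wind and the local chord axis
    return np.arctan2(v_local[1], v_local[0])

# Hypothetical values: 10 m/s inflow, small flapwise motion, 5 deg twist
alpha = section_angle_of_attack(np.array([10.0, 0.0]),
                                np.array([0.0, 1.0]),
                                np.deg2rad(5.0))
print(np.rad2deg(alpha))
```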
Abstract:
Heterogeneous materials are ubiquitous in nature and as synthetic materials. These materials provide unique combinations of desirable mechanical properties emerging from heterogeneities at different length scales. Future structural and technological applications will require the development of advanced lightweight materials with superior strength and toughness. Cost-effective design of advanced high-performance synthetic materials by tailoring their microstructure is the challenge facing the materials design community, and prior knowledge of structure-property relationships for these materials is imperative for optimal design. Thus, understanding such relationships for heterogeneous materials is of primary interest. Furthermore, computational burden is becoming a critical concern in several areas of heterogeneous materials design, so computationally efficient and accurate predictive tools are highly essential. In the present study, we focus mainly on the mechanical behavior of soft cellular materials and of a tough biological material, the mussel byssus thread. Cellular materials exhibit microstructural heterogeneity through an interconnected network of the same material phase, whereas the mussel byssus thread comprises two distinct material phases. A robust numerical framework is developed to investigate the micromechanisms behind the macroscopic response of both of these materials. Using this framework, the effect of microstructural parameters on the stress state of cellular specimens during split Hopkinson pressure bar tests has been addressed. A Voronoi tessellation based algorithm has been developed to simulate the cellular microstructure. The micromechanisms (microinertia, microbuckling and microbending) governing the macroscopic behavior of cellular solids are investigated thoroughly with respect to various microstructural and loading parameters. To understand the origin of the high toughness of the mussel byssus thread, a Genetic Algorithm (GA) based optimization framework has been developed; it is found that the two different material phases (collagens) of the thread are optimally distributed along its length. These applications demonstrate that the presence of heterogeneity in a system demands high computational resources for simulation and modeling. Thus, a High Dimensional Model Representation (HDMR) based surrogate modeling concept has been proposed to reduce computational complexity, and its applicability has been demonstrated in failure envelope construction and in multiscale finite element techniques. It is observed that the surrogate-based model can capture the behavior of complex material systems with sufficient accuracy. The computational algorithms presented in this thesis will further pave the way for accurate prediction of the macroscopic deformation behavior of various classes of advanced materials from their measurable microstructural features at a reasonable computational cost.
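A compact sketch of the first-order (cut-)HDMR surrogate idea invoked above (a generic illustration, not the thesis code; the test function is a hypothetical stand-in for an expensive FE response): the model output is approximated by a constant plus one-dimensional component functions sampled along lines through a reference point.

```python
import numpy as np
from scipy.interpolate import interp1d

def build_cut_hdmr(f, center, grids):
    """First-order cut-HDMR surrogate of f around `center`.

    f      : callable taking an (n,) input vector
    center : (n,) reference (cut) point
    grids  : list of 1-D sample grids, one per input dimension
    """
    f0 = f(center)
    components = []
    for i, grid in enumerate(grids):
        vals = []
        for xi in grid:
            x = center.copy()
            x[i] = xi
            vals.append(f(x) - f0)      # 1-D component f_i(x_i)
        components.append(interp1d(grid, vals, kind="cubic"))
    def surrogate(x):
        # f(x) ~ f0 + sum_i f_i(x_i); interactions are neglected
        return f0 + sum(comp(x[i]) for i, comp in enumerate(components))
    return surrogate

# Hypothetical test function standing in for an expensive FE response
f = lambda x: np.sin(x[0]) + x[1]**2 + 0.1 * x[0] * x[1]
center = np.array([0.0, 0.0])
grids = [np.linspace(-2, 2, 9)] * 2
fhat = build_cut_hdmr(f, center, grids)
x = np.array([0.7, -0.5])
print(f(x), float(fhat(x)))  # close, up to the neglected interaction term
```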
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized 2× and 4× CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model climate–carbon feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
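A hedged restatement of the additivity result in symbols (notation mine, not the paper's): writing ΔT_i for the surface air temperature response to forcing i alone,

```latex
% Additivity of single-forcing SAT responses reported for the EMICs;
% the carbon-cycle response does not obey the analogous relation for
% land-use change combined with CO2 in some models.
\[
\Delta T_{\text{all}}(t) \;\approx\; \sum_{i} \Delta T_{i}(t).
\]
```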
Abstract:
The mental speed approach explains individual differences in intelligence by faster information processing in individuals with higher compared to lower intelligence, especially in elementary cognitive tasks (ECTs). One of the most examined ECTs is the Hick paradigm. The present study aimed to contrast reaction time (RT) and P3 latency in a Hick task as predictors of intelligence. Although both RT and P3 latency are commonly used as indicators of mental speed, it is also known that they measure different aspects of information processing. Participants were 113 female students. RT and P3 latency were measured while participants completed the Hick task with four levels of complexity. Intelligence was assessed with Cattell's Culture Fair Test. An RT factor and a P3 factor were extracted by employing a PCA across complexity levels. There was no significant correlation between the factors. Commonality analysis was used to determine the proportions of unique and shared variance in intelligence explained by the RT and P3 latency factors. RT and P3 latency explained 5.5% and 5% of unique variance in intelligence, respectively, whereas the two speed factors did not explain a significant portion of shared variance. This result suggests that RT and P3 latency in the Hick paradigm measure different aspects of information processing that explain different parts of the variance in intelligence.
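For two predictors, the commonality decomposition used here has a closed form (standard formulas, stated in my notation): with R² denoting squared multiple correlations with intelligence,

```latex
% Unique (U) and common (C) variance components for two predictors:
\[
U_{\mathrm{RT}} = R^2_{\mathrm{RT,P3}} - R^2_{\mathrm{P3}},
\qquad
U_{\mathrm{P3}} = R^2_{\mathrm{RT,P3}} - R^2_{\mathrm{RT}},
\qquad
C = R^2_{\mathrm{RT}} + R^2_{\mathrm{P3}} - R^2_{\mathrm{RT,P3}}.
\]
```

The reported 5.5% and 5% are the two unique components, while the non-significant shared portion corresponds to C.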
Abstract:
The boundary element method is especially well suited to the analysis of the seismic response of valleys with complicated topography and stratigraphy. In this paper the method's capabilities are illustrated using as an example an irregularly stratified sedimentary basin (a test site) that has been modelled using 2D discretization and the Direct Boundary Element Method (DBEM). Site models displaying different levels of complexity are used in practice. The multi-layered model's seismic response shows generally good agreement with observed data in terms of amplification levels, fundamental frequencies and high spatial variability, although important features such as the location of high-frequency peaks are still missed. Even simplified 2D models reveal important characteristics of the wave field that 1D modelling does not capture.
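For reference, a hedged statement of the boundary integral identity on which the direct BEM rests (generic elastodynamic form; notation mine, not the paper's):

```latex
% Direct boundary integral equation: u = displacement, t = traction,
% U* and T* the fundamental solutions, c(xi) the free term
% (1/2 at a smooth boundary point).
\[
c(\xi)\, u(\xi) + \int_{\Gamma} T^{*}(\xi, y)\, u(y)\, d\Gamma_y
= \int_{\Gamma} U^{*}(\xi, y)\, t(y)\, d\Gamma_y.
\]
```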
Abstract:
Stochastic model updating must be considered in order to quantify the uncertainties inherent in real-world engineering structures. By this means the statistical properties of structural parameters, rather than deterministic values, can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in terms of theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted to generate samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting deterministic values of the parameters. The parameter means and variances can then be statistically estimated from the parameter predictions obtained by running all the samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of parameter variability. The proposed method is demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method presents similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
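A minimal sketch of the proposed decomposition (illustrative only: the response surface, parameter dimensions and response statistics below are hypothetical stand-ins, not the paper's models):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical quadratic response surface standing in for the FE model:
# maps two structural parameters to two measured responses (e.g. the
# first two natural frequencies). Coefficients are illustrative only.
def response_surface(p):
    p1, p2 = p
    return np.array([10.0 + 2.0 * p1 + 0.3 * p1**2 + 0.5 * p2,
                     25.0 + 1.5 * p2 + 0.2 * p2**2 + 0.4 * p1])

# Assumed (measured) response statistics: means and standard deviations
resp_mean = np.array([12.8, 27.3])
resp_std = np.array([0.3, 0.4])

# Monte Carlo: each response sample defines one deterministic inverse
# problem, solved cheaply on the surrogate instead of the FE model.
estimates = []
for _ in range(2000):
    target = rng.normal(resp_mean, resp_std)
    sol = least_squares(lambda p: response_surface(p) - target,
                        x0=np.zeros(2))
    estimates.append(sol.x)
estimates = np.asarray(estimates)

# Statistical properties of the parameters follow from all predictions
print("parameter means:    ", estimates.mean(axis=0))
print("parameter variances:", estimates.var(axis=0))
```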
Abstract:
This research brings together a cluster of interests around a very specific way of generating architecture: the production of objects whose underlying form is not fixed a priori. The knowledge presented rests on currents of recent thought that encourage feeding the creative source of architecture with other fields of knowledge. Sensitive animist knowledge and objective scientific knowledge have been correlative throughout history but have rarely been synchronous. This research is also an attempt to combine the two types of knowledge, regaining an inertia already sensed at the beginning of the twentieth century. It is therefore an essay on dissolving the opposition between these two worlds in favour of their complementarity within a single shared vision. The ultimate goal of this research is the development of a critical system of analysis for architectural objects that allows differentiation between those which respond to problems completely and sincerely and those which hide, beneath an agreed-upon surface, the lack of a method for resolving the complexity of the creative present. The research considers three distinct groups of knowledge, each treated in its own chapter. The first chapter deals with the creative impulse. It defines the need to frame the creative individual who, regardless of the social forces of the moment, senses that something beyond remains unresolved. We call "rebel creator" a type of figure recognisable throughout history, able to recognise the changes at work in the present and to use them to discover the new and draw closer to the creative origin. At present, this figure is the one who intuits, or has long intuited, the existence of a growing complexity in contemporary thought that cannot be ignored. The second chapter develops some properties of systems of creative action. It presents a framework of scientific knowledge highly specific to our time that architecture has not yet absorbed or reflected directly in its way of creating. These are topics whose presence in society is almost mundane, yet they resist inclusion in creative processes as part of consciousness. Most of them concern precision, invisible orders, and properties of matter or energy, treated objectively and apolitically. The final goal is to bring these concepts and properties into our sensible world, unifying them inseparably under a single point of view. The last chapter deals with complexity and its capacity for reduction to the essential. Here, by way of conclusions, several concepts are introduced for the development of a critical system for the architecture of our time. Among them is Essential Complexity, defined as the complexity that is unavoidable when architecture responds to the growing problems and demands it faces in the present. The thesis maintains the importance of reporting that, in the current state of things, it is impossible to respond sincerely with simplistic solutions, and hence the need for solutions of a necessarily complex character.
In this sense, the concept of Underlying Form is also defined as a critical tool for evaluating the response of each architecture and for possessing a system and critical vision of what constitutes a consistent object in the face of the situation it confronts. This underlying form is defined as a way of understanding jointly and synchronously what we perceive sensibly, inseparable from the hidden creative, technological, material and energetic forces that support the definition and understanding of any constructed object.
Abstract:
Analysis of previously published sets of DNA microarray gene expression data by singular value decomposition has uncovered underlying patterns or “characteristic modes” in their temporal profiles. These patterns contribute unequally to the structure of the expression profiles. Moreover, the essential features of a given set of expression profiles are captured using just a small number of characteristic modes. This leads to the striking conclusion that the transcriptional response of a genome is orchestrated in a few fundamental patterns of gene expression change. These patterns are both simple and robust, dominating the alterations in expression of genes throughout the genome. Moreover, the characteristic modes of gene expression change in response to environmental perturbations are similar in such distant organisms as yeast and human cells. This analysis reveals simple regularities in the seemingly complex transcriptional transitions of diverse cells to new states, and these provide insights into the operation of the underlying genetic networks.
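A brief sketch of the decomposition being described (generic, assuming a genes × time-points expression matrix; the data below are synthetic, not from the published sets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a genes x time-points expression matrix:
# 500 genes whose profiles are mixtures of two underlying patterns.
t = np.linspace(0, 1, 12)
modes = np.vstack([np.sin(2 * np.pi * t), np.exp(-3 * t)])
X = rng.normal(size=(500, 2)) @ modes + 0.05 * rng.normal(size=(500, 12))

# Singular value decomposition: the rows of Vt are the "characteristic
# modes" (temporal patterns); singular values weight their contribution.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Fraction of the expression structure captured by each mode
frac = s**2 / np.sum(s**2)
print(np.round(frac[:4], 3))  # a few modes dominate, as in the text
```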
Abstract:
Genetic analysis of plant–pathogen interactions has demonstrated that resistance to infection is often determined by the interaction of dominant plant resistance (R) genes and dominant pathogen-encoded avirulence (Avr) genes. It was postulated that R genes encode receptors for Avr determinants. A large number of R genes and their cognate Avr genes have now been analyzed at the molecular level. R gene loci are extremely polymorphic, particularly in sequences encoding amino acids of the leucine-rich repeat motif. A major challenge is to determine how Avr perception by R proteins triggers the plant defense response. Mutational analysis has identified several genes required for the function of specific R proteins. Here we report the identification of Rcr3, a tomato gene required specifically for Cf-2-mediated resistance. We propose that Avr products interact with host proteins to promote disease, and that R proteins “guard” these host components and initiate Avr-dependent plant defense responses.
Abstract:
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using a numerical cell size suitably small compared to h*, show periodic response or complex and apparently chaotic histories of large events, but have not been found to show small-event complexity like the self-similar (power-law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small-event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale, or for which the numerical procedure imposes an abrupt strength drop at the onset of slip, have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h* and a corresponding reduction of numerical grid size.
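For reference, the Gutenberg-Richter frequency-size statistics referred to above take the standard form (a general seismological relation, not specific to these simulations):

```latex
% Gutenberg-Richter law: N = number of events with magnitude >= M;
% the b-value is typically close to 1 for natural seismicity.
\[
\log_{10} N(\geq M) = a - bM.
\]
```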