900 results for Field-based model


Relevance:

90.00%

Publisher:

Abstract:

Facilitation is a major force shaping the structure and diversity of plant communities in terrestrial ecosystems. Detecting positive plant–plant interactions relies on the combination of field experimentation and the demonstration of spatial association between neighboring plants. This has often restricted the study of facilitation to particular sites, limiting the development of systematic assessments of facilitation over regional and global scales. Here we explore whether the frequency of plant spatial associations detected from high-resolution remotely sensed images can be used to infer plant facilitation at the community level in drylands around the globe. We correlated the information from remotely sensed images freely available through Google Earth with detailed field assessments, and used a simple individual-based model to generate patch-size distributions under different assumptions about the type and strength of plant–plant interactions. Most of the patterns found in the remotely sensed images were more right-skewed than the patterns from the null model simulating a random distribution, suggesting that the plants in the studied drylands show stronger spatial clustering than expected by chance. We found that positive plant co-occurrence, as measured in the field, was significantly related to the skewness of the vegetation patch-size distribution measured from Google Earth images. Our findings suggest that the relative frequency of facilitation may be inferred from spatial pattern signals measured from remotely sensed images, since facilitation often determines positive co-occurrence among neighboring plants. They pave the way for a systematic global assessment of the role of facilitation in terrestrial ecosystems. Read more: http://www.esajournals.org/doi/10.1890/14-2358.1
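
The core test described, comparing the skewness of an observed patch-size distribution against a null model, can be sketched in a few lines. This is an illustration only, not the authors' individual-based model: the null here is a simple random partition of the vegetated area, and the "observed" patch sizes are synthetic stand-ins for pixel counts of connected vegetated regions extracted from a classified Google Earth image.

```python
# Minimal sketch: right-tail test of patch-size skewness against a
# random-partition null. Not the paper's individual-based model.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

def null_skewness(n_patches, total_cover, n_sims=1000):
    """Skewness of patch sizes when the total vegetated area is split
    uniformly at random among a fixed number of patches."""
    sims = []
    for _ in range(n_sims):
        cuts = np.sort(rng.integers(1, total_cover, n_patches - 1))
        sizes = np.diff(np.concatenate(([0], cuts, [total_cover])))
        sims.append(skew(sizes))
    return np.array(sims)

observed = rng.pareto(1.5, 200) + 1          # stand-in for observed patch sizes
obs_skew = skew(observed)
null = null_skewness(len(observed), int(observed.sum()))
p = np.mean(null >= obs_skew)                # right-tail p-value
print(f"observed skewness {obs_skew:.2f}, null p-value {p:.3f}")
```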

Relevance:

90.00%

Publisher:

Abstract:

Using a new Admittance-based model for electrical noise able to handle Fluctuations and Dissipations of electrical energy, we explain the phase noise of oscillators that use feedback around L-C resonators. We show that Fluctuations produce the Line Broadening of their output spectrum around its mean frequency f₀ and that the Pedestal of phase noise far from f₀ comes from Dissipations modified by the feedback electronics. The charge noise power 4FkT/R C²/s that disturbs the otherwise periodic fluctuation of charge these oscillators aim to sustain in their L-C-R resonator is what creates their phase noise, proportional to Leeson's noise figure F and to the charge noise power 4kT/R C²/s of their capacitance C that today's modelling would consider as the current noise density in A²/Hz of their resistance R. Linked with this (A²/Hz ↔ C²/s) equivalence, R becomes a random series in time of discrete chances to Dissipate energy in Thermal Equilibrium (TE), giving a similar series of discrete Conversions of electrical energy into heat when the resonator is out of TE due to the Signal power it handles. Therefore, phase noise reflects the way oscillators sense thermal exchanges of energy with their environment.
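
For orientation, the two regimes the abstract describes, 1/f² line broadening near the carrier and a flat pedestal far from it, both appear in Leeson's classical single-sideband phase-noise formula, which the noise figure F enters. The sketch below uses that standard textbook form, not the paper's admittance-based derivation; all parameter values are hypothetical.

```python
# Leeson's classical SSB phase-noise formula (one common form),
# illustrating line broadening near f0 and the far-offset pedestal.
import numpy as np

k = 1.380649e-23          # Boltzmann constant (J/K)

def leeson_dbc(fm, f0, QL, F, Ps, T=290.0):
    """L(fm) in dBc/Hz. fm: offset from carrier (Hz), f0: carrier (Hz),
    QL: loaded Q, F: Leeson noise figure, Ps: signal power (W)."""
    ssb = (F * k * T / (2 * Ps)) * (1 + (f0 / (2 * QL * fm)) ** 2)
    return 10 * np.log10(ssb)

fm = np.logspace(2, 7, 6)  # 100 Hz .. 10 MHz offsets
print(leeson_dbc(fm, f0=10e6, QL=50, F=2.0, Ps=1e-3))
# Near the carrier the 1/fm^2 term dominates (line broadening);
# far from it L(fm) flattens to the pedestal F*k*T/(2*Ps).
```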

Relevance:

90.00%

Publisher:

Abstract:

Leaf nitrogen and leaf surface area influence the exchange of gases between terrestrial ecosystems and the atmosphere, and play a significant role in the global cycles of carbon, nitrogen and water. The purpose of this study is to use field-based and satellite remote-sensing-based methods to assess leaf nitrogen pools in five diverse European agricultural landscapes located in Denmark, Scotland (United Kingdom), Poland, the Netherlands and Italy. REGFLEC (REGularized canopy reFLECtance) is an advanced image-based inverse canopy radiative transfer modelling system which has shown proficiency for regional mapping of leaf area index (LAI) and leaf chlorophyll (CHLl) using remote sensing data. In this study, high-spatial-resolution (10–20 m) remote sensing images acquired from the multispectral sensors aboard the SPOT (Satellite Pour l'Observation de la Terre) satellites were used to assess the capability of REGFLEC for mapping spatial variations in LAI and CHLl, and their relation to leaf nitrogen (Nl) data, in the five landscapes. REGFLEC is based on physical laws and includes an automatic model parameterization scheme which makes the tool independent of field data for model calibration. REGFLEC performance was evaluated using LAI measurements and non-destructive measurements (using a SPAD meter) of leaf-scale CHLl and Nl concentrations in 93 fields representing the crop- and grasslands of the five landscapes. Furthermore, empirical relationships between the field measurements (LAI, CHLl and Nl) and five spectral vegetation indices (the Normalized Difference Vegetation Index, the Simple Ratio, the Enhanced Vegetation Index-2, the Green Normalized Difference Vegetation Index, and the green chlorophyll index) were used to assess field data coherence and to serve as a comparison basis for assessing REGFLEC model performance. The field measurements showed strong vertical CHLl gradient profiles in 26% of the fields, which affected REGFLEC performance as well as the relationships between the spectral vegetation indices (SVIs) and the field measurements. When the range of surface types increased, the REGFLEC results were in better agreement with the field data than the empirical SVI regression models. Selecting only homogeneous canopies with uniform CHLl distributions as reference data for evaluation, REGFLEC was able to explain 69% of LAI observations (rmse = 0.76), 46% of measured canopy chlorophyll contents (rmse = 719 mg m−2) and 51% of measured canopy nitrogen contents (rmse = 2.7 g m−2). Better results were obtained for the individual landscapes, except for Italy, where REGFLEC performed poorly owing to a lack of dense vegetation canopies at the time of satellite recording; the presence of vegetation is needed to parameterize the REGFLEC model. Combining the REGFLEC- and SVI-based model results to minimize errors, a "snap-shot" assessment of total leaf nitrogen pools in the five landscapes yielded values ranging from 0.6 to 4.0 t km−2. Differences in leaf nitrogen pools between landscapes are attributed to seasonal variations, the extent of agricultural area, species variations, and spatial variations in nutrient availability. In order to facilitate a substantial assessment of variations in Nl pools and their relation to landscape-based nitrogen and carbon cycling processes, time series of satellite data are needed. The upcoming Sentinel-2 satellite mission will provide new multiple narrow-band data at high spatio-temporal resolution, which is expected to further improve remote sensing capabilities for mapping LAI, CHLl and Nl.
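
The empirical benchmark side of the study, SVIs computed from band reflectances and regressed against field data, is easy to illustrate. The index formulas below are the standard published ones; the band reflectances and "field" LAI values are synthetic stand-ins, and this is not the REGFLEC system itself.

```python
# Sketch: the five SVIs named in the abstract, plus a simple empirical
# LAI ~ NDVI regression of the kind used as a comparison basis.
import numpy as np

def indices(red, nir, green):
    """Standard formulas for the five SVIs named in the abstract."""
    ndvi  = (nir - red) / (nir + red)
    sr    = nir / red                          # Simple Ratio
    evi2  = 2.5 * (nir - red) / (nir + 2.4 * red + 1)
    gndvi = (nir - green) / (nir + green)
    ci_g  = nir / green - 1                    # green chlorophyll index
    return ndvi, sr, evi2, gndvi, ci_g

rng = np.random.default_rng(1)
red   = rng.uniform(0.02, 0.20, 93)            # hypothetical reflectances,
nir   = rng.uniform(0.20, 0.60, 93)            # one value per "field"
green = rng.uniform(0.03, 0.25, 93)
lai_field = rng.uniform(0.5, 6.0, 93)          # stand-in for field-measured LAI

ndvi = indices(red, nir, green)[0]
slope, intercept = np.polyfit(ndvi, lai_field, 1)
pred = slope * ndvi + intercept
rmse = np.sqrt(np.mean((pred - lai_field) ** 2))
print(f"LAI ~ NDVI: slope={slope:.2f}, rmse={rmse:.2f}")
```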

Relevance:

90.00%

Publisher:

Abstract:

The growing complexity, heterogeneity and dynamism inherent in telecommunications networks, distributed systems and emerging advanced information and communication services, as well as their increased criticality and strategic importance, call for the adoption of increasingly sophisticated technologies for their management, coordination and integration by network operators, service providers and end-user companies, in order to assure adequate levels of functionality, performance and reliability. The management strategies adopted traditionally follow models that are too static and centralised, have a high supervision component and are difficult to scale. The pressing need to make management more flexible and, at the same time, more scalable and robust recently led to considerable interest in developing new paradigms based on hierarchical and distributed models, as a natural evolution from the first weakly distributed hierarchical models that succeeded the centralised paradigm. Thus new models based on management by delegation, the mobile code paradigm, distributed object technologies and web services came into being. These alternatives have proved enormously robust, flexible and scalable compared with the traditional management strategies, but many problems still remain unsolved. Current research lines assume that the distributed hierarchical paradigm has as yet failed to solve many of the problems related to robustness, scalability and flexibility, and advocate migration towards a strongly distributed cooperative paradigm. These lines of research were spawned by Distributed Artificial Intelligence (DAI) and, specifically, by the autonomous agent paradigm and Multi-Agent Systems (MAS). They all revolve around a set of objectives that can be summarised as: achieving greater autonomy in management functionality and a greater self-configuration capability, solving the problems of scalability and the need for supervision that plague current systems; evolving towards strongly distributed, goal-driven cooperative control techniques; and semantically enriching information models. More and more researchers are starting to use agents for network and distributed systems management. However, the boundaries established in their work between mobile agents (which follow the mobile code paradigm) and autonomous agents (which really follow the cooperative paradigm) are fuzzy. Many of these approaches focus on the use of mobile agents, which, as with the mobile code techniques mentioned above, allows them to inject more dynamism into the traditional concept of management by delegation. They are thereby able to make management more flexible, distribute management logic close to the data, and distribute control. However, they remain within the distributed hierarchical paradigm. While a management architecture faithful to the strongly distributed cooperative paradigm has yet to be defined, these lines of research have revealed that the information, communication and organisation models of existing management architectures are far from adequate.

In this context, this dissertation presents an architectural model for the holonic management of distributed systems and services through societies of autonomous agents. The main objectives of this model are to raise the level of management-task automation, increase the scalability of management solutions, provide support for delegation both by domains and by macro-tasks, and achieve a high level of interoperability in open environments. Bearing in mind these objectives, a formal semantic information model based on description logic has been developed, which increases management automation by using rational autonomous agents capable of reasoning, inferring and dynamically integrating knowledge and services conceptualised by means of the CIM model and formalised at the semantic level by means of description logic. The information model also includes a mapping, at the CIM metamodel level, to the OWL ontology specification language, which amounts to a significant advance in the field of XML-based representation and exchange of models and meta-information. At the interaction level, the model introduces a formal specification language (ACSL) for conversations between agents, based on speech act theory, and contributes an operational semantics for this language that eases the task of verifying formal properties associated with the interaction protocol. A role-oriented holonic organisational model has also been developed, whose main features meet the requirements of emerging distributed services, including the absence of centralised control, dynamic restructuring capabilities, cooperative skills, and facilities for adaptation to different organisational cultures. The model includes a normative submodel suited to the autonomous nature of management holons and based on deontic and action modal logics.
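
Purely as an illustration of the kind of structures the abstract names (holons organised by roles, delegation by domain or macro-task, speech-act-typed messages), here is a minimal Python sketch. Every class, field and role name is hypothetical and not taken from the dissertation.

```python
# Illustrative sketch only: a holon that is both a whole and a part,
# delegating a macro-task via a speech-act-typed message.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Message:
    """A conversation message typed by its illocutionary act."""
    performative: str        # e.g. "request", "inform", "agree"
    sender: str
    receiver: str
    content: str

@dataclass
class Holon:
    """A management holon: an autonomous agent holding roles,
    a management domain, and optional sub-holons."""
    name: str
    roles: List[str]
    domain: str
    parts: List["Holon"] = field(default_factory=list)

    def delegate(self, task: str) -> Message:
        # Naive delegation: forward the task to the first sub-holon
        # whose role list covers it.
        for part in self.parts:
            if task in part.roles:
                return Message("request", self.name, part.name, task)
        return Message("inform", self.name, self.name, f"no delegate for {task}")

monitor = Holon("monitor-1", ["monitor-links"], "backbone")
root = Holon("manager", ["coordinate"], "network", parts=[monitor])
print(root.delegate("monitor-links"))
```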

Relevance:

90.00%

Publisher:

Abstract:

The existing seismic isolation systems are based on well-known and accepted physical principles, but they still have some functional drawbacks. As an attempt at improvement, the Roll-N-Cage (RNC) isolator has recently been proposed. It is designed to achieve a balance between controlling isolator displacement demands and limiting structural accelerations. It provides in a single unit all the necessary functions of vertical rigid support, horizontal flexibility with enhanced stability, resistance to low service loads and minor vibration, and hysteretic energy dissipation. It is characterized by two unique features: a self-braking (buffer) mechanism and a self-recentering mechanism. This paper presents an advanced representation of the main and unique features of the RNC isolator using an available finite element code, SAP2000. The validity of the obtained SAP2000 model is checked against experimental, numerical and analytical results. The paper then investigates the merits and demerits of activating the built-in buffer mechanism with respect to both structural pounding mitigation and isolation efficiency. It addresses the problem of passively alleviating possible inner pounding within the RNC isolator, which may arise from the activation of its self-braking mechanism under severe excitations such as near-fault earthquakes. The results show that the finite-element-based model can closely match and accurately predict the overall behavior of the RNC isolator with small errors. Moreover, the inherent buffer mechanism of the RNC isolator can mitigate or even eliminate direct structure-to-structure pounding under severe excitation when the separation gaps between adjacent structures are limited. In addition, increasing the inherent hysteretic damping of the RNC isolator can efficiently limit its peak displacement together with the severity of any inner pounding that develops and, therefore, alleviate or even eliminate the negative effects the buffer mechanism might otherwise have on the overall RNC-isolated structural response.
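
A conceptual sketch of the two force-law ingredients discussed, hysteretic yielding and a stiff buffer that engages beyond a displacement gap, is given below. This is not the paper's SAP2000 model: it is a generic bilinear hysteresis (an elastoplastic core in parallel with a linear spring) plus a gap spring, driven on a single-degree-of-freedom mass with hypothetical parameters.

```python
# 1-DOF mass on a generic bilinear-hysteretic isolator with a buffer gap,
# integrated with semi-implicit Euler. All parameter values hypothetical.
import numpy as np

m, k1, k2, fy = 1.0e5, 8.0e6, 8.0e5, 4.0e4   # mass, core/post-yield stiffness, yield force
k_buf, gap = 8.0e7, 0.15                      # buffer stiffness, engagement gap (m)

def isolator_force(u, up):
    """Elastoplastic core (k1, fy) in parallel with spring k2 gives a
    bilinear loop; a stiff buffer adds on beyond the gap."""
    f_core = k1 * (u - up)
    if abs(f_core) > fy:                      # core yields: update plastic offset
        up = u - fy * np.sign(f_core) / k1
        f_core = fy * np.sign(f_core)
    f = f_core + k2 * u
    if abs(u) > gap:                          # self-braking buffer engages
        f += k_buf * (abs(u) - gap) * np.sign(u)
    return f, up

dt, T = 1e-3, 20.0
t = np.arange(0, T, dt)
ag = 0.4 * 9.81 * np.sin(2 * np.pi * 0.8 * t) # hypothetical ground acceleration
u = v = up = 0.0
umax = 0.0
for a in ag:
    f, up = isolator_force(u, up)
    v += dt * (-a - f / m)                    # semi-implicit Euler step
    u += dt * v
    umax = max(umax, abs(u))
print(f"peak isolator displacement: {umax:.3f} m (gap = {gap} m)")
```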

Relevance:

90.00%

Publisher:

Abstract:

This work deals with a central issue in the ultimate-load design of reinforced concrete and masonry structures: the actual possibility that the yield strains needed to reach an ultimate state can be developed by the regions of the structure that must exhaust their ultimate capacity to fulfil such a state. Design decisions that ensure equilibrium of the structure under the prescribed ultimate loads by mere statics are taken as a starting point, but the value of the strains needed to reach that ultimate state is determined directly. The ultimate-load theorems are therefore not taken as they are; instead, the problem is formulated from a fully elasto-plastic point of view. The path the structure must follow in a monotonically increasing loading process is not neglected, so the non-yielded regions restrain the yield strains that pure ultimate-load theory assumes to be free. In terms of work and energy, the domains of the body that have not reached their ultimate state are introduced into the balance of external work and internal strain energy. Once the energy balance of the system is thus established as its potential, imposing the stationarity condition determines the displacement fields and, therefore, the yield strains as well. In short, this provides a means of verifying whether the ductility of a prescribed design is sufficient, and to what extent, to fulfil the intended ultimate state under given imposed loads.

In the course of the theoretical development, some important points emerge. Among them is the verification that the ultimate state reached through the elasto-plastic energy balance satisfies the conditions of the collapse solution predicted by the ultimate-load theorems, ensuring that the determined solution (by uniqueness of the elastic problem) coincides with the uniqueness theorem of the ultimate load, while also identifying the equilibrium system and the collapse mechanism, two aspects the ultimate-load theorems cannot provide, since they only establish the value of the ultimate load. Another point concerns the particular case in which the yield surface of the system is flat, which makes the equilibrium possibilities for a given collapse mechanism infinite; this underlies the apparent freedom to choose the yield distribution at will in beams and arches. From the foregoing approach it is then found that, once the internal constitutive laws are defined, there is a condition inherent to any system that allows it to arrive at the onset of collapse without demanding any plastic strain, all regions that have reached their ultimate strength yielding simultaneously. In a sense, the collapse would appear brittle. In such a case the system fully retains its ductile capacity until the end, and this state acts as a canonical representative of any other equilibrium solution devised for the same structure under identical internal design criteria. The closer a design is to, or the farther from, the canonical solution, the lower or higher is the ductility demand on the system to verify the ultimate load. Solutions departing too far from the canonical solution will not fulfil the intended ultimate state for lack of ductility: the plastic-strain demand of some yielded region will exceed its capacity, revealing an ultimate load limited by lack of ductility, lower than that expected from mere equilibrium.

To determine the yield strains of the plastic hinges, a model formulated with the Boundary Element Method has been used, which provides a continuous displacement field, and hence continuous strain and stress fields, even in the presence of cracks on the boundary. An important point is the far from negligible difference found in the plastic rotation capacity of reinforced concrete sections with and without shear. For masonry hinges, the difference arises from the eccentricity conditions, associated with the relative value of the compression, where the differences between yielded regions under high and low relative axial force are notable. On the other hand, albeit in a somewhat secondary manner, serviceability conditions also impose limits on the intended ultimate-load design. Yielding entails considerable strains and deformations, both local and global. Hence, in the serviceability state, if the yielding of some region entails cracking that is excessive for the surrounding environment, the solution becomes unviable. Likewise, structural deformations impose a severe limit on design possibilities. Especially in building structures, active deflections are a critical factor when choosing one solution or another. Therefore, to the limit imposed by ductility, the limit imposed by serviceability conditions must be added. In this way, considering the ductility and serviceability conditions in each case, every design decision can be assessed by predicting its consequences at the ultimate-load and serviceability states. That is, once the limits are known, it is possible to bound which a priori designs will certainly satisfy the prescribed ductility and serviceability conditions, and to what extent; and, if they cannot be satisfied, which corrections should be made to the preliminary design so that they are. Finally, from the conclusions drawn, several lines of study and experimentation are proposed in order to complete or extend the results obtained in a practical way.
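
The stationarity argument can be stated compactly. The following is a minimal restatement in standard small-strain plasticity notation; the symbols are ours, not the thesis's own:

```latex
% Potential = elastic strain energy (including the non-yielded domain)
% + plastic dissipation - external work; stationarity determines both
% the displacement field u and the plastic-strain field eps^p.
\Pi(u,\varepsilon^{p}) =
\underbrace{\int_{\Omega} W\!\left(\varepsilon(u)-\varepsilon^{p}\right)\mathrm{d}\Omega}_{\text{elastic strain energy}}
+ \underbrace{\int_{\Omega_{y}} \sigma_{y}\,\lvert\varepsilon^{p}\rvert\,\mathrm{d}\Omega}_{\text{plastic dissipation}}
- \underbrace{\int_{\partial\Omega} \bar{t}\cdot u \,\mathrm{d}S}_{\text{external work}},
\qquad \delta\Pi = 0 .
```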

Relevance:

90.00%

Publisher:

Abstract:

Recent experimental data on the conductivity σ+(T), T → 0, on the metallic side of the metal–insulator transition in ideally random (neutron-transmutation-doped) 70Ge:Ga have shown that σ+(0) ∝ (N − Nc)^μ with μ = ½, confirming earlier ultra-low-temperature results for Si:P. This value is inconsistent with theoretical predictions based on diffusive classical scaling models, but it can be understood by a quantum-directed percolative filamentary amplitude model in which electronic basis states exist that have a well-defined momentum parallel, but not normal, to the applied electric field. The model, which is based on a new kind of broken symmetry, also explains the anomalous sign reversal of the derivative of the temperature dependence in the critical regime.
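
The quoted exponent is typically extracted by a log-log fit of the zero-temperature conductivity against the distance to the critical density. A small sketch, on synthetic data that assumes μ = 0.5 and a hypothetical Nc:

```python
# Fit log sigma(0) vs log (N - Nc); the slope estimates mu.
import numpy as np

rng = np.random.default_rng(2)
Nc = 1.0e17                                   # hypothetical critical density (cm^-3)
N = Nc * (1 + np.logspace(-2, 0, 20))         # dopant densities above Nc
sigma0 = 30.0 * ((N - Nc) / Nc) ** 0.5        # sigma(0) with mu = 0.5
sigma0 *= rng.normal(1.0, 0.03, N.size)       # 3% measurement scatter

mu, log_prefactor = np.polyfit(np.log(N - Nc), np.log(sigma0), 1)
print(f"fitted mu = {mu:.3f} (expected 0.5)")
```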

Relevance:

90.00%

Publisher:

Abstract:

The Psychiatric Reform, the current mental health policy, redirects psychiatric care resources towards a community-based model, replacing the asylum model. The approach proposed by the Psychiatric Reform seeks to combine theoretical and practical effort towards building the Psychosocial Care Network. This study aimed to unveil the conceptions and practices of mental health workers, built in the praxis of their professional trajectories and life contexts, regarding the incorporation of the psychosocial care model or the maintenance of asylum principles, which characterise traditional professional practice in mental health. It also aimed to identify points of tension, reflecting interests of different kinds, as obstacles and challenges to the implementation of the Psychiatric Reform. The research, qualitative in nature, drew on interviews with 10 professionals working in the field (3 nurses, 3 psychologists, 3 psychiatrists and 1 occupational therapist), based on the oral testimony technique and a thematic interview guide. The professionals' accounts were organised into general and specific categories with a view to interpreting the narratives in the light of the specialised literature. The discourses of these mental health professionals show that an ideological tension strongly marks the health field. Some professionals reported striving to build interdisciplinary team practices guided by the psychosocial model, but referred to resistance from other team members. Practically all the professionals voice a discourse of humanisation in the mental health field, but some do not articulate critical views of asylum models. Some workers reveal a belief in the possibility of an integrated coexistence of the asylum mode and the psychosocial mode. For these CAPS workers, the continued existence of psychiatric hospitals is desirable and their humanisation is possible. This seems to indicate that mental health practices still operate on epistemological premises that differentiate subjects who may or may not circulate in the social sphere. The existence of psychiatric hospitals, regarded as total institutions, is problematised and questioned by the anti-asylum movement (Luta Antimanicomial); their persistence indicates the permanence of the asylum logic that sustains exclusively psychiatric hospitals among care services, with the support of part of the professionals of the mental health network. By endorsing the possibility of coexistence between the asylum and psychosocial models, these professionals show that even a supposedly humanising clinical outlook, whose discourse defends dignified treatment, can operate within a positivist theoretical-methodological model and is not necessarily bound to a political stance of subjects of rights and citizenship. The professionals whose narratives rejected the continued existence of psychiatric hospitals argue that the transformations in mental health knowledge and practices must be both clinical and political. These workers have taken part, or still take part, in social movements, identified as places of critical reflection on established ideas, apparently contributing to the denaturalisation of culturally constructed conceptions that guide professional practices.

In view of these findings, we may ask whether concrete and symbolic deinstitutionalisation lies on the horizon of a public mental health care policy genuinely committed to its actual implementation, and whether the persistence of psychiatric hospitals and therapeutic communities is not mischaracterising the original proposals for building Psychosocial Care, given the private interests and the maintenance of the asylum logic that run contrary to the principles of the SUS.

Relevance:

90.00%

Publisher:

Abstract:

We present a microcanonical Monte Carlo simulation of the site-diluted Potts model in three dimensions with eight internal states, partly carried out on the citizen supercomputer Ibercivis. Upon dilution, the pure model's first-order transition becomes second order at a tricritical point. We accurately compute the critical exponents at the tricritical point. As expected from the Cardy–Jacobsen conjecture, they are compatible with their random-field Ising model counterparts. The conclusion is further reinforced by comparison with older data for the Potts model with four states.
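
The model itself (a q-state Potts Hamiltonian on a cubic lattice with quenched site dilution) is easy to write down. The toy sketch below uses canonical Metropolis updates for brevity, not the paper's microcanonical method, and a tiny lattice; it only illustrates the Hamiltonian and the dilution.

```python
# Toy canonical Metropolis for the site-diluted 8-state Potts model in 3D.
import numpy as np

rng = np.random.default_rng(3)
L, q, p, beta = 8, 8, 0.7, 0.6   # lattice size, states, site occupation, inverse T
occ = rng.random((L, L, L)) < p               # quenched dilution mask
s = rng.integers(q, size=(L, L, L))           # Potts states

def local_energy(s, occ, x, y, z):
    """Minus the number of equal occupied neighbours (coupling J = 1)."""
    e = 0
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        xn, yn, zn = (x+dx) % L, (y+dy) % L, (z+dz) % L
        if occ[xn, yn, zn] and s[xn, yn, zn] == s[x, y, z]:
            e -= 1
    return e

for _ in range(10 * L**3):                    # a few Metropolis sweeps
    x, y, z = rng.integers(L, size=3)
    if not occ[x, y, z]:
        continue
    old = s[x, y, z]
    e_old = local_energy(s, occ, x, y, z)
    s[x, y, z] = rng.integers(q)              # propose a new state
    dE = local_energy(s, occ, x, y, z) - e_old
    if dE > 0 and rng.random() >= np.exp(-beta * dE):
        s[x, y, z] = old                      # reject
print("fraction in majority state:",
      np.bincount(s[occ], minlength=q).max() / occ.sum())
```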

Relevance:

90.00%

Publisher:

Abstract:

We address the electronic structure and magnetic properties of vacancies and voids both in graphene and graphene ribbons. By using a mean-field Hubbard model, we study the appearance of magnetic textures associated with removing a single atom (vacancy) and multiple adjacent atoms (voids) as well as the magnetic interactions between them. A simple set of rules, based on the Lieb theorem, link the atomic structure and the spatial arrangement of the defects to the emerging magnetic order. The total spin S of a given defect depends on its sublattice imbalance, but some defects with S=0 can still have local magnetic moments. The sublattice imbalance also determines whether the defects interact ferromagnetically or antiferromagnetically with one another and the range of these magnetic interactions is studied in some simple cases. We find that in semiconducting armchair ribbons and two-dimensional graphene without global sublattice imbalance, there is a maximum defect density above which local magnetization disappears. Interestingly, the electronic properties of semiconducting graphene ribbons with uncoupled local moments are very similar to those of diluted magnetic semiconductors, presenting giant Zeeman splitting.
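
The counting rule the abstract builds on (Lieb's theorem for the bipartite Hubbard model at half filling) reduces to simple bookkeeping of which sublattice each removed atom belonged to. A minimal sketch:

```python
# Lieb-theorem counting rule: the ground-state spin of a defect is set
# by the sublattice imbalance of the removed atoms, S = |N_A - N_B| / 2.
def defect_spin(removed_sites):
    """removed_sites: iterable of 'A'/'B' sublattice labels of removed atoms."""
    n_a = sum(1 for sub in removed_sites if sub == "A")
    n_b = len(removed_sites) - n_a
    return abs(n_a - n_b) / 2

print(defect_spin(["A"]))            # single vacancy on A: S = 1/2
print(defect_spin(["A", "B"]))       # balanced divacancy: S = 0
print(defect_spin(["A", "A", "B"]))  # imbalanced void: S = 1/2
```

As the abstract notes, this total S does not tell the whole story: some S = 0 defects still carry local moments, which is where the mean-field Hubbard treatment goes beyond the counting rule.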

Relevance:

90.00%

Publisher:

Abstract:

Permeability of the ocean crust is one of the most crucial parameters for constraining submarine fluid flow systems. Active hydrothermal fields are dynamic areas where fluid flow strongly affects the geochemistry and biology of the surrounding environment. There have been few permeability measurements in these regions, especially in felsic-hosted hydrothermal systems. We present a data set of 38 permeability and porosity measurements from the PACMANUS hydrothermal field, an actively venting, felsic hydrothermal field in the eastern Manus Basin. Permeability was measured using a complex transient method on 2.54-cm minicores. Permeability varies greatly between the samples, spanning more than five orders of magnitude. Permeability decreases with both depth and decreasing porosity. When the alteration intensity of individual samples is considered, the relationships of porosity and permeability with depth become more clearly defined. For incompletely altered samples (defined as >5% fresh rock), permeability and porosity are constant with depth. For completely altered samples (defined as <5% fresh rock), permeability and porosity decrease with depth. On average, the permeability values from the PACMANUS hydrothermal field are greater than those in other submarine environments measured with similar core-scale laboratory methods; the average permeability, 4.5 × 10⁻¹⁶ m², is two to four orders of magnitude greater than in other areas. Although the core-scale permeability is higher than in other seafloor environments, it is still too low to produce the fluid velocities observed in the PACMANUS hydrothermal field based on simplified analytical calculations. It is likely that core-scale permeability measurements are not representative of the bulk rock permeability of the hydrothermal system overall, and that the latter is predominantly fracture controlled.
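
The kind of relationship described, permeability trends that sharpen once samples are split by alteration intensity, can be sketched as a grouped log-linear regression. The data below are synthetic stand-ins spanning several orders of magnitude, as in the abstract; this is not the measured data set.

```python
# Sketch: regress log10(permeability) on porosity, separately for
# completely altered (<5% fresh rock) and incompletely altered samples.
import numpy as np

rng = np.random.default_rng(4)
phi = rng.uniform(0.05, 0.35, 38)             # porosity (fraction), 38 samples
completely_altered = rng.random(38) < 0.5
logk = -18 + 8 * phi + rng.normal(0, 0.4, 38) # log10 k (m^2), hypothetical trend

for mask, label in ((completely_altered, "completely altered"),
                    (~completely_altered, "incompletely altered")):
    slope, intercept = np.polyfit(phi[mask], logk[mask], 1)
    print(f"{label}: log10(k) = {slope:.1f}*phi + {intercept:.1f}")
```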

Relevance:

90.00%

Publisher:

Abstract:

We extended the petrographic and geochemical dataset for the recently discovered Transantarctic Mountain microtektites in order to check our previous claim that they are related to the Australasian strewn field. Based on color and composition, the 465 microtektites identified so far comprise two groups of transparent glass spheres less than ca. 800 µm in diameter: the most abundant pale-yellow, or normal, microtektites, and the rare pale-green, or high-Mg, microtektites. The major element composition of the normal microtektites determined through electron microprobe analysis is characterized by high contents of silica (SiO2 = 71.5 ± 3.6 (1 sigma) wt%) and alumina (Al2O3 = 15.5 ± 2.2 (1 sigma) wt%), low total alkali element contents (0.50-1.85 wt%), and MgO abundances <6 wt%. The high-Mg microtektites have a distinctly higher MgO content, >10 wt%. Transantarctic Mountain microtektites contain rare silica-rich (up to 93 wt% SiO2) glassy inclusions similar to those found in two Australasian microtektites analyzed here for comparison. These inclusions are interpreted as partially digested, lechatelierite-like inclusions typical of tektites and microtektites. The major and trace element (by laser ablation inductively coupled plasma mass spectrometry) abundance pattern of the Transantarctic Mountain microtektites matches the average upper continental crust composition for most elements. Major deviations include a strong to moderate depletion in volatile elements including Pb, Zn, Na, K, Rb, Sr and Cs, likely the result of severe volatile loss during the high-temperature melting and vaporization of crustal target rocks. The normal and high-Mg Transantarctic Mountain microtektites have compositions similar to the most volatile-poor normal and high-Mg Australasian microtektites reported in the literature. Their very low H2O and B contents (by secondary ion mass spectrometry) of 85 ± 58 (1 sigma) µg/g and 0.53 ± 0.21 µg/g, respectively, evidence the extreme volatile loss characteristically observed in tektites. The Sr and Nd isotopic compositions of multigrain samples of Transantarctic Mountain microtektites are 87Sr/86Sr ~ 0.71629 and 143Nd/144Nd ~ 0.51209, and fall into the Australasian tektite compositional field. The Nd model age calculated with respect to the chondritic uniform reservoir (CHUR) is ~1.1 Ga, indicating a Meso-Proterozoic crustal source rock, as was also derived for Australasian tektites. Coupled with the Quaternary age reported in the literature, the extended dataset presented in this work strengthens our previous conclusion that the Transantarctic Mountain microtektites represent a major southward extension of the Australasian tektite/microtektite strewn field. Furthermore, the significant depletion in volatile elements (i.e., Pb, B, Na, K, Zn, Rb, Sr and Cs) of both normal and high-Mg Transantarctic Mountain microtektites relative to the Australasian ones provides further confirmation of a possible relationship between high temperature-time regimes in the microtektite-forming process and ejection distance.
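
The ~1.1 Ga figure follows from the standard CHUR Nd model-age calculation. In the sketch below, the sample 143Nd/144Nd is taken from the abstract, but the sample 147Sm/144Nd is not given there, so a typical upper-crustal value is assumed purely for illustration; the decay constant and CHUR parameters are the standard reference values.

```python
# Standard CHUR Nd model age: T = (1/lambda) * ln(1 + dNd/dSm), where
# dNd and dSm are the sample-minus-CHUR differences of the two ratios.
import math

LAMBDA_147SM = 6.54e-12          # 147Sm decay constant (1/yr)
CHUR_143_144 = 0.512638          # present-day CHUR 143Nd/144Nd
CHUR_147_144 = 0.1967            # present-day CHUR 147Sm/144Nd

def nd_model_age(nd143_144, sm147_144):
    """CHUR model age in Ga."""
    ratio = (nd143_144 - CHUR_143_144) / (sm147_144 - CHUR_147_144)
    return math.log(1 + ratio) / LAMBDA_147SM / 1e9

# 143Nd/144Nd from the abstract; 147Sm/144Nd = 0.12 is an assumed
# (hypothetical) crustal value, not a reported measurement.
print(f"T_CHUR = {nd_model_age(0.51209, 0.12):.2f} Ga")
```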

Relevance:

90.00%

Publisher:

Abstract:

Previous research shows that correlations tend to increase in magnitude when individuals are aggregated across groups. This suggests that uncorrelated constellations of personality variables (such as the primary scales of Extraversion and Neuroticism) may display much higher correlations in aggregate factor analysis. We hypothesize and report that individual-level factor analysis can be explained in terms of Giant Three (or Big Five) descriptions of personality, whereas aggregate-level factor analysis can be explained in terms of Gray's physiologically based model. Although alternative interpretations exist, aggregate-level factor analysis may correctly identify the basis of an individual's personality as a result of the better reliability of measures due to aggregation. We discuss the implications of this form of analysis in terms of construct validity, personality theory, and its applicability in general. Copyright (C) 2003 John Wiley & Sons, Ltd.
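
The paper's starting point, correlations growing in magnitude under aggregation, is easy to reproduce numerically. In the sketch below (synthetic data, hypothetical group structure and loadings), two traits share a weak group-level factor and are otherwise independent within groups: the individual-level correlation is small, while the correlation of group means is large because within-group noise averages out.

```python
# Demonstration: weak individual-level correlation, strong aggregate-level
# correlation, driven by a shared group factor plus independent noise.
import numpy as np

rng = np.random.default_rng(5)
n_groups, n_per = 50, 40
g = rng.normal(size=n_groups)                  # shared group-level factor

x = 0.3 * np.repeat(g, n_per) + rng.normal(size=n_groups * n_per)
y = 0.3 * np.repeat(g, n_per) + rng.normal(size=n_groups * n_per)

r_ind = np.corrcoef(x, y)[0, 1]
group_means = lambda v: v.reshape(n_groups, n_per).mean(axis=1)
r_agg = np.corrcoef(group_means(x), group_means(y))[0, 1]
print(f"individual-level r = {r_ind:.2f}, aggregate-level r = {r_agg:.2f}")
```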

Relevance:

90.00%

Publisher:

Abstract:

Queensland fruit fly, Bactrocera (Dacus) tryoni (QFF), is arguably the most costly horticultural insect pest in Australia. Despite this, no model is available to describe its population dynamics and aid in its management. This paper describes a cohort-based model of the population dynamics of the Queensland fruit fly. The model is primarily driven by weather variables, and so can be used at any location where appropriate meteorological data are available. In the model, the life cycle is divided into a number of discrete stages to allow physiological processes to be defined as accurately as possible. Eggs develop and hatch into larvae, which develop into pupae, which emerge as either teneral females or males. Both females and males can enter reproductive and over-wintering life stages, and there is a trapped-male life stage to allow model predictions to be compared with trap catch data. All development rates are temperature-dependent. Daily mortality rates are temperature-dependent, but may also be influenced by moisture, density of larvae in fruit, fruit suitability, and age. Eggs, larvae and pupae all have constant establishment mortalities, causing a defined proportion of individuals to die upon entering that life stage. Transfer from one immature stage to the next is based on physiological age. In the adult life stages, transfer between stages may require additional and/or alternative functions. Maximum lifetime fecundity is 1400 eggs per female, and the maximum oviposition rate is 80 eggs per female per day. The actual number of eggs laid by a female on any given day is restricted by temperature, density of larvae in fruit, suitability of fruit for oviposition, and female activity. Activity of reproductive females and males, which affects reproduction and trapping, decreases with rainfall. Trapping of reproductive males is determined by activity, temperature and the proportion of males in the active population. Limitations of the model are discussed. Despite these, the model provides a useful agreement with trap catch data, and allows key areas for future research to be identified. These critical gaps in the current state of knowledge exist despite over 50 years of research on this key pest. By explicitly attempting to model the population dynamics of this pest we have clearly identified the research areas that must be addressed before progress can be made in developing the model into an operational tool for the management of Queensland fruit fly. (C) 2003 Published by Elsevier B.V.
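
The cohort bookkeeping described, temperature-driven physiological ageing through the immature stages with a fixed establishment mortality applied on entry to each stage, can be sketched as follows. This is an illustration of the mechanism only: the thresholds, degree-day totals and survival fractions are hypothetical, not the paper's fitted values.

```python
# Minimal cohort sketch: degree-day development through egg -> larva ->
# pupa -> adult, with establishment mortality on entry to each stage.
import numpy as np

STAGES = ("egg", "larva", "pupa", "adult")
DD_REQUIRED = {"egg": 20.0, "larva": 110.0, "pupa": 160.0}   # degree-days
T_BASE = 12.0                                                # dev. threshold (C)
ESTABLISH_SURVIVAL = {"egg": 0.8, "larva": 0.7, "pupa": 0.9}

def run_cohort(n0, daily_mean_temps):
    stage, age_dd = "egg", 0.0
    n = n0 * ESTABLISH_SURVIVAL["egg"]        # establishment mortality at entry
    for t in daily_mean_temps:
        age_dd += max(0.0, t - T_BASE)        # physiological age accrues
        if stage != "adult" and age_dd >= DD_REQUIRED[stage]:
            age_dd = 0.0
            stage = STAGES[STAGES.index(stage) + 1]
            if stage in ESTABLISH_SURVIVAL:   # mortality on entering next stage
                n *= ESTABLISH_SURVIVAL[stage]
    return stage, n

temps = 18 + 6 * np.sin(np.linspace(0, 3 * np.pi, 120))      # synthetic weather
print(run_cohort(1000, temps))
```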