863 results for Modelling and rendering programs
Abstract:
A complete census of planetary systems around a volume-limited sample of solar-type stars (FGK dwarfs) in the Solar neighborhood (d ≲ 15 pc), with uniform sensitivity down to Earth-mass planets within their Habitable Zones out to several AUs, would be a major milestone in extrasolar planet astrophysics. This fundamental goal can be achieved with a mission concept such as NEAT, the Nearby Earth Astrometric Telescope. NEAT is designed to carry out space-borne extremely-high-precision astrometric measurements at the 0.05 μas (1σ) accuracy level, sufficient to detect dynamical effects due to orbiting planets of mass even lower than Earth's around the nearest stars. Such a survey mission would provide the actual planetary masses and the full orbital geometry for all the components of the detected planetary systems down to the Earth-mass limit. The NEAT performance limits can be achieved by carrying out differential astrometry between the targets and a set of suitable reference stars in the field. The NEAT instrument design consists of an off-axis parabola single-mirror telescope (D = 1 m), a detector with a large field of view located 40 m away from the telescope and made of 8 small movable CCDs located around a fixed central CCD, and an interferometric calibration system monitoring dynamical Young's fringes originating from metrology fibers located at the primary mirror. The mission profile is driven by the fact that the two main modules of the payload, the telescope and the focal plane, must be located 40 m apart, leading to the choice of a formation-flying option as the reference mission, and of a deployable-boom option as an alternative choice. The proposed mission architecture relies on the use of two satellites, of about 700 kg each, operating at L2 for 5 years, flying in formation and offering a capability of more than 20,000 reconfigurations. 
The two satellites will be launched in a stacked configuration using a Soyuz ST launch vehicle. The NEAT primary science program will encompass an astrometric survey of our 200 closest F-, G- and K-type stellar neighbors, with an average of 50 visits each distributed over the nominal mission duration. The main survey operation will use approximately 70% of the mission lifetime. The remaining 30% of NEAT observing time might be allocated, for example, to improve the characterization of the architecture of selected planetary systems around nearby targets of specific interest (low-mass stars, young stars, etc.) discovered by Gaia, ground-based high-precision radial-velocity surveys, and other programs. With its exquisite, surgical astrometric precision, NEAT holds the promise to provide the first thorough census for Earth-mass planets around stars in the immediate vicinity of our Sun.
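The accuracy figures above can be put in context with the standard astrometric-signature relation α ≈ (m_p/M_*)(a/d), which gives α in arcseconds when the two masses share units, a is in AU and d is in parsecs. The sketch below uses only this textbook formula; nothing in it is taken from the NEAT design documents:

```python
# Astrometric signature: alpha [arcsec] = (m_p / M_star) * (a [AU] / d [pc]).
# Textbook relation; the Earth/Sun mass ratio is the only physical constant used.

M_EARTH_PER_SUN = 3.003e-6  # Earth mass in solar masses

def astrometric_signature_uas(m_planet_earth, m_star_sun, a_au, d_pc):
    """Peak astrometric displacement of the star, in micro-arcseconds."""
    alpha_arcsec = (m_planet_earth * M_EARTH_PER_SUN / m_star_sun) * (a_au / d_pc)
    return alpha_arcsec * 1e6

# An Earth analogue (1 Earth mass, 1 AU) around a solar-mass star at 10 pc:
alpha = astrometric_signature_uas(1.0, 1.0, 1.0, 10.0)
print(f"{alpha:.3f} uas")  # ~0.30 uas, i.e. ~6x the quoted 0.05 uas (1-sigma) accuracy
```

At the survey edge (15 pc) the same signal shrinks to about 0.2 μas, which is why sub-0.05 μas differential accuracy is needed for Earth-mass detections throughout the volume.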
Abstract:
Objective: To describe and analyze the teaching of the Integrated Management of Childhood Illness (IMCI) strategy in Brazilian undergraduate nursing programs. Method: As part of an international multicentric study, a cross-sectional online survey was conducted between May and October 2010 with 571 undergraduate nursing programs in Brazil. Results: Responses were received from 142 programs, 75% private and 25% public. Of these, 64% included the IMCI strategy in their theoretical content, and 50% included IMCI as part of the students' practical experience. The locations most used for practical teaching were primary health care units. The 'treatment' module was taught by the fewest programs, and few programs had access to the IMCI instructional manuals. All programs used exams for evaluation, and private institutions were more likely to include class participation as part of the evaluation. Teaching staff in public institutions were more likely to have received training in teaching IMCI. Conclusion: In spite of the relevance of the IMCI strategy in child care, its content is not addressed in all undergraduate programs in Brazil, and many programs do not have access to the IMCI teaching manuals and have not provided training in IMCI to their teaching staff.
Abstract:
Parallel kinematic structures are considered very suitable architectures for positioning and orienting the tools of robotic mechanisms. However, developing dynamic models for this kind of system is sometimes a difficult task. In fact, the direct application of traditional methods of robotics for modelling and analysing such systems usually does not lead to efficient and systematic algorithms. This work addresses this issue: it presents a modular approach to generating the dynamic model and shows how, through some convenient modifications, these methods can be made more applicable to parallel structures as well. Kane's formulation for obtaining the dynamic equations is shown to be one of the easiest ways to deal with redundant coordinates and kinematic constraints, so that a suitable choice of a set of coordinates allows the remainder of the modelling procedure to be computer-aided. The advantages of this approach are discussed in the modelling of a 3-DOF asymmetric parallel mechanism.
Abstract:
Aerosol particles are likely important contributors to our future climate. Further, during recent years, effects on human health arising from emissions of particulate material have gained increasing attention. In order to quantify the effect of aerosols on both climate and human health, we need to better quantify the interplay between sources and sinks of aerosol particle number and mass on large spatial scales. So far, long-term regional observations of aerosol properties have been scarce, although they are argued to be necessary in order to advance our knowledge of the regional and global distribution of aerosols. In this context, regional studies of aerosol properties and aerosol dynamics are truly important areas of investigation. This thesis is devoted to investigations of aerosol number size distribution observations performed through the course of one year, encompassing observational data from five stations covering an area from the southern parts of Sweden up to the northern parts of Finland. The thesis tries to give a description of aerosol size distribution dynamics from both a quantitative and a qualitative point of view. It focuses on properties and changes in the aerosol size distribution as a function of location, season, source area, transport pathways and links to various meteorological conditions. The investigations performed in this thesis show that although the basic behaviour of the aerosol number size distribution in terms of seasonal and diurnal characteristics is similar at all stations in the measurement network, the aerosol over the Nordic countries is characterised by a typically sharp gradient in aerosol number and mass. This gradient is argued to derive from the geographical locations of the stations in relation to the dominant sources and transport pathways. 
It is clear that the source area significantly determines the aerosol size distribution properties, but it is obvious that transport conditions, in terms of frequency of precipitation and cloudiness, in some cases even more strongly control the evolution of the number size distribution. Aerosol dynamic processes under clear-sky transport are, however, likewise argued to be highly important. Southerly transport of marine air and northerly transport of air from continental sources are studied in detail under clear-sky conditions by performing a pseudo-Lagrangian box-model evaluation of the two type cases. Results from both modelling and observations suggest that nucleation events contribute to an increase in integral number during southerly transport of comparably clean marine air, while number depletion dominates the evolution of the size distribution during northerly transport. This difference is largely explained by the different concentrations of pre-existing aerosol surface associated with the two type cases. Mass is found to accumulate in many of the individual transport cases studied. This mass increase is argued to be controlled by the emission of organic compounds from the boreal forest, which puts the boreal forest in a central position for estimates of aerosol forcing on a regional scale.
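The competition between nucleation and depletion described here can be caricatured with a zero-dimensional number budget dN/dt = J − K·N² − L·N (source, coagulation sink, scavenging sink). All parameter values below are hypothetical round numbers for illustration, not the thesis' box-model setup:

```python
# Zero-dimensional aerosol number budget: dN/dt = J - K*N**2 - L*N
#   J [cm^-3 s^-1]: nucleation source, K [cm^3 s^-1]: coagulation sink,
#   L [s^-1]: scavenging/deposition sink.  All values are hypothetical.

def integrate_number(N0, J, K, L, dt=10.0, t_end=86400.0):
    """Explicit-Euler integration of the number budget over one day."""
    N, t = N0, 0.0
    while t < t_end:
        N = max(N + (J - K * N * N - L * N) * dt, 0.0)
        t += dt
    return N

# Clean marine air: little pre-existing surface, nucleation active (J > 0):
clean = integrate_number(N0=300.0, J=0.5, K=1e-9, L=1e-6)
# Aged continental air: high background, nucleation suppressed (J = 0):
polluted = integrate_number(N0=5000.0, J=0.0, K=1e-9, L=1e-6)
print(clean > 300.0, polluted < 5000.0)  # growth vs depletion
```

With low pre-existing aerosol the source term wins and N grows; starting from a high background with the source switched off, the sinks deplete N, mirroring the southerly/northerly contrast found in the observations.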
Abstract:
This study investigates the relation between literacy practices in and out of school in rural Tanzania. Using the perspective of linguistic anthropology, literacy practices in five villages in Karagwe district in the northwest of Tanzania have been analysed. The outcome may be used as a basis for educational planning and literacy programmes. The analysis has revealed an intimate relation between language, literacy and power. In Karagwe, traditional élites have drawn on literacy to construct and reconstruct their authority, while new élites, such as individual women and some young people, have been able to use literacy as one tool for gaining access to power. The study has also revealed a high level of bilingualism and a high emphasis on education in the area, which demonstrates a potential for future education there. At the same time, discontinuity in language use, mainly caused by the stigmatisation of what is perceived as local and traditional, such as the mother tongue of the majority of the children, and the high status accorded to all that is perceived as Western, has turned out to constitute a great obstacle to pupils' learning. The use of ethnographic perspectives has enabled comparisons between interactional patterns in school and outside school. This has revealed communicative patterns in school that hinder pupils' learning, while the same patterns in other discourses reinforce learning. By using ethnography, relations between explicit and implicit language ideologies and their impact in educational contexts may be revealed. This knowledge may then be used to make educational plans and literacy programmes more relevant and efficient, not only in poor post-colonial settings such as Tanzania, but also elsewhere, such as in Western settings.
Abstract:
This thesis investigates two aspects of Constraint Handling Rules (CHR). It proposes a compositional semantics and a technique for program transformation. CHR is a concurrent committed-choice constraint logic programming language consisting of guarded rules, which transform multi-sets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06]; it belongs to the family of declarative languages. It was initially designed for writing constraint solvers but has recently also proven to be a general-purpose language, being Turing equivalent [SSD05a]. Compositionality is the first CHR aspect to be considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics for CHR which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR which permits us to trace the application of propagation rules and consequently to avoid trivial non-termination [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional one uses a history to avoid trivial non-termination too. Program transformation is the second CHR aspect to be considered, with particular regard to the unfolding technique. This technique is an appealing approach which allows us to optimize a given program and, in particular, to improve its run-time efficiency or space consumption. Essentially it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answer [Frü98], between the original program and the transformed ones. The unfolding technique is one of the basic operations used by most program transformation systems. 
It consists of replacing a procedure call by its definition. In CHR every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of the rule represents the definition of the call. While there is a large body of literature on the transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, maintenance of confluence and termination between the original and transformed programs is shown. This thesis is organized in the following manner. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages with particular attention to CHR and related languages. Then, Section 1.2 introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies for solving the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics. Then, Section 2.2 presents the compositionality results. Afterwards, Section 2.3 expounds upon the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation needs a particular syntax, called annotated syntax, which is introduced in Section 3.1, and its related modified operational semantics ω_t is presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness. 
Then, Section 3.4 discusses the problems related to the replacement of a rule by its unfolded version, which in turn yields a correctness condition that holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related work and directions for future work.
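The flavour of committed-choice rule application on a constraint store can be conveyed with a toy rewriting engine for the classic CHR gcd solver. This is an illustrative miniature (simplification rules only, so no propagation history is needed), not the thesis' formal semantics:

```python
# Toy committed-choice rewriting in the spirit of CHR simplification rules,
# using the classic gcd solver:
#   gcd(0) <=> true
#   gcd(N), gcd(M) <=> 0 < N <= M | gcd(N), gcd(M - N)
# Illustrative miniature only (no propagation rules, so no history is needed).

def step(store):
    """Apply one rule to the constraint store (a list of ints); True if one fired."""
    if 0 in store:                      # gcd(0) <=> true
        store.remove(0)
        return True
    for i in range(len(store)):         # gcd(N), gcd(M) <=> ... | gcd(N), gcd(M-N)
        for j in range(len(store)):
            if i != j and 0 < store[i] <= store[j]:
                store[j] -= store[i]
                return True
    return False

def solve(constraints):
    """Rewrite until exhaustion, committing to one applicable rule per step."""
    store = list(constraints)
    while step(store):
        pass
    return store

print(solve([12, 8]))  # -> [4]: only gcd(4) survives
```

Here the conjunction gcd(12), gcd(8) plays the role of a procedure call, and each firing replaces part of the store by the rule body, which is exactly the view under which unfolding operates.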
Abstract:
We present advances of the meccano method [1,2] for tetrahedral mesh generation and volumetric parameterization of solids. The method combines several previous procedures: a mapping from the meccano boundary to the solid surface, a 3-D local refinement algorithm, and simultaneous mesh untangling and smoothing. The key to the method lies in defining a one-to-one volumetric transformation between the parametric and physical domains. Results with adaptive finite elements will be shown for several engineering problems. In addition, the application of the method to T-spline modelling and isogeometric analysis [3,4] of complex geometries will be introduced…
Abstract:
Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and developing efficient processing and classification algorithms. In this thesis, these frameworks are applied to modelling and processing biomedical signals in two different contexts: ultrasound medical imaging systems, and primate neural activity analysis and modelling. In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic image segmentation and classification of prostate ultrasound scans. In the former application, a stochastic model of the radio-frequency signal measured from an ultrasonic transducer is derived. This model is then employed for developing, in a statistical framework, a regularized deconvolution procedure for enhancing signal resolution. In the latter application, different statistical models are used to characterize images of prostate tissue, extracting different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine learning techniques are used for automatic classification of the different regions of interest. In the context of neural activity signals, an example of a bio-inspired dynamical network was developed to help in studies of motor-related processes in the brain of primate monkeys. The presented model aims to mimic the abstract functionality of a cell population in the 7a parietal region of primate monkeys during the execution of learned behavioural tasks.
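Regularized deconvolution of this kind can be illustrated with a generic frequency-domain Tikhonov/Wiener-style estimator. This is a standard textbook scheme under an assumed known pulse, not the specific stochastic RF model derived in the thesis:

```python
import numpy as np

# Frequency-domain Tikhonov/Wiener-style deconvolution:
#   x_hat = IFFT( conj(H) * Y / (|H|^2 + lam) )
# Generic textbook estimator with an assumed known pulse h, not the thesis model.

def tikhonov_deconvolve(y, h, lam=1e-3):
    n = len(y)
    H = np.fft.fft(h, n)                      # zero-padded pulse spectrum
    Y = np.fft.fft(y, n)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

# Two reflectors blurred by a short (hypothetical) transducer pulse:
x = np.zeros(64); x[10] = 1.0; x[30] = 0.5
h = np.array([0.2, 1.0, 0.2])
y = np.convolve(x, h)[:64]                    # observed RF line
x_hat = tikhonov_deconvolve(y, h)
print(int(np.argmax(x_hat)))                  # 10: the stronger reflector is recovered
```

The regularization weight lam trades resolution against noise amplification at frequencies where the pulse spectrum is weak; a statistically derived prior, as in the thesis, essentially chooses that trade-off from the signal model rather than by hand.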
Abstract:
The last decades have seen a large effort by the scientific community to study and understand the physics of sea ice. We currently have a wide - even though still not exhaustive - knowledge of sea ice dynamics and thermodynamics and of their temporal and spatial variability. Sea ice biogeochemistry is instead largely unknown. Sea ice algal production may account for up to 25% of overall primary production in the ice-covered waters of the Southern Ocean. However, the influence of physical factors, such as the location of ice formation, the role of snow cover and light availability, on sea ice primary production is poorly understood. There are only sparse localized observations and little knowledge of the functioning of sea ice biogeochemistry at larger scales. Modelling then becomes an auxiliary tool to help qualify and quantify the role of sea ice biogeochemistry in ocean dynamics. In this thesis, a novel approach is used for the modelling and coupling of sea ice biogeochemistry - and in particular its primary production - to sea ice physics. Previous attempts were based on the coupling of rather complex sea ice physical models to empirical or relatively simple biological or biogeochemical models. The focus is moved here to a more biologically-oriented point of view. A simple yet comprehensive physical model of sea ice thermodynamics (ESIM) was developed and coupled to a novel sea ice implementation (BFM-SI) of the Biogeochemical Flux Model (BFM). The BFM is a comprehensive model, widely used and validated in the open-ocean environment and in regional seas. The physical model has been developed with the biogeochemical properties of sea ice in mind, along with the physical inputs required to model sea ice biogeochemistry. 
The central concept of the coupling is the modelling of the Biologically-Active-Layer (BAL), the time-varying fraction of sea ice that is continuously connected to the ocean via brine pockets and channels and acts as a rich habitat for many microorganisms. The physical model provides the key physical properties of the BAL (e.g., brine volume, temperature and salinity), and the BFM-SI simulates the physiological and ecological response of the biological community to the physical environment. The new biogeochemical model is also coupled to the pelagic BFM through the exchange of organic and inorganic matter at the boundaries between the two systems. This is done by computing the entrapment of matter and gases when sea ice grows, and their release to the ocean when sea ice melts, to ensure mass conservation. The model was run in different ice-covered regions of the world ocean to test the generality of the parameterizations. The focus was particularly on regions of landfast ice, where primary production is generally large. The implementation of the BFM in sea ice and the coupling structure in General Circulation Models will add a new component to the latter (and in general to Earth System Models), which will be able to provide adequate estimates of the role and importance of sea ice biogeochemistry in the global carbon cycle.
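The mass-conserving entrapment/release idea at the ice-ocean boundary can be sketched as a two-pool tracer budget. The proportional entrapment rule and the 1 m mixed-layer reference below are illustrative assumptions, not the BFM-SI parameterization:

```python
# Two-pool, mass-conserving tracer exchange at the ice-ocean boundary.
# The proportional entrapment rule and the 1 m mixed-layer reference are
# illustrative assumptions, not the BFM-SI formulation.

def exchange(c_ocean, c_ice, h_ice, dh):
    """Return (ocean, ice) tracer amounts after an ice thickness change dh [m]."""
    if dh > 0:                                  # growth: entrap ocean tracer
        flux = c_ocean * min(dh / 1.0, 1.0)     # fraction of a 1 m mixed layer
    elif h_ice > 0:                             # melt: release ice tracer
        flux = -c_ice * min(-dh / h_ice, 1.0)
    else:
        flux = 0.0
    return c_ocean - flux, c_ice + flux

o, i = exchange(10.0, 0.0, 0.5, 0.2)   # 0.2 m of growth entraps 2.0 units
o2, i2 = exchange(o, i, 0.7, -0.7)     # complete melt returns everything
print(o + i, o2 + i2)                  # 10.0 10.0: total tracer is conserved
```

Whatever the entrapment rule, the key property the coupling must guarantee is the one checked here: the ocean and ice pools always sum to the same total.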
Abstract:
In recent years, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by higher requirements on system performance in terms of the quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy boxed products such as food or cigarettes, and so on. Another indication of their complexity derives from the fact that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the “packaging valley”. Usually, the various parts of an AMS interact among themselves in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often, this is the case in large-scale systems organized in a modular and distributed manner. 
Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and to the different operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; directing the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, as a support for the maintenance operations of the machine. The kinds of facilities that designers can directly find on the market, in terms of software component libraries, provide in fact adequate support as regards the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. 
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to organically deal with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very “unstructured”. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. 
Industrial automation has lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS, together with reliable mechanical elements, an increasing number of electronic devices are also present, which are more vulnerable by their own nature. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. 
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2, a survey of the state of the software engineering paradigm applied to industrial automation is presented. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5, a new approach is presented, based on Discrete Event Systems, for the problem of formal software verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
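The Discrete Event Systems view of fault diagnosis can be illustrated with a minimal diagnoser that tracks every (state, fault-flag) pair consistent with the observed events. The toy plant and its state/event names below are invented for the sketch and are not the thesis' models:

```python
# Minimal discrete-event diagnoser: track every (state, fault-flag) pair
# consistent with the observed event sequence; if all surviving pairs are
# faulty, the fault is certain.  Toy plant with invented state/event names.

PLANT = {                                        # state -> {event: next state}
    'idle':   {'start': 'run'},
    'run':    {'stop': 'idle', 'f': 'broken'},   # 'f' is the unobservable fault
    'broken': {'stop': 'broken'},                # after the fault, 'stop' no longer resets
}
UNOBSERVABLE = {'f'}

def uo_closure(pairs):
    """Extend estimates along unobservable events, flagging runs that take 'f'."""
    pairs, frontier = set(pairs), list(pairs)
    while frontier:
        s, _ = frontier.pop()
        for e in UNOBSERVABLE & set(PLANT.get(s, {})):
            p = (PLANT[s][e], True)
            if p not in pairs:
                pairs.add(p); frontier.append(p)
    return pairs

def diagnose(obs):
    est = uo_closure({('idle', False)})
    for e in obs:
        est = uo_closure({(PLANT[s][e], fl) for s, fl in est if e in PLANT.get(s, {})})
    if est and all(fl for _, fl in est):
        return 'fault'
    return 'uncertain' if any(fl for _, fl in est) else 'normal'

print(diagnose(['start']))                  # 'uncertain': the fault may have happened
print(diagnose(['start', 'stop', 'stop']))  # 'fault': only faulty runs explain this
```

An active fault-tolerant controller would consume exactly this kind of online verdict, switching to a recovery strategy once the estimate becomes certainly faulty.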
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergent peculiar structures of the individual phenotype. Being able to reproduce the system dynamics at different levels of such a hierarchy might be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises, the thesis proposes a review of the different approaches already developed for modelling developmental biology problems, as well as of the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. 
The task is defined as an optimisation problem over the parameter space, in which the objective function to be minimised is the distance between the output of the simulator and a target one. The problem is tackled with a metaheuristic algorithm. As an example of the application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The model's goal is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data with spatial and temporal resolution, acquired from free online sources.
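The simulation engine is based on an optimised many-species/many-channels variant of Gillespie's direct method; the plain textbook version of that algorithm, applied to a single assumed toy reaction, looks like this:

```python
import random

# Gillespie's direct method (plain textbook version).  The thesis engine uses
# an optimised many-species/many-channels variant; the reaction and rate below
# are an assumed toy example.

def gillespie(x, reactions, t_end, seed=1):
    """x: {species: count}; reactions: list of (propensity_fn, stoichiometry)."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = [rate(x) for rate, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:                      # no reaction can fire
            break
        t += rng.expovariate(a0)           # exponential waiting time
        r = rng.uniform(0.0, a0)           # choose a channel ~ propensity
        for p, (_, stoich) in zip(props, reactions):
            if r < p:
                for sp, d in stoich.items():
                    x[sp] += d
                break
            r -= p
    return x

# Single decay channel A -> B with propensity k*A (k = 0.1 assumed):
state = gillespie({'A': 100, 'B': 0},
                  [(lambda x: 0.1 * x['A'], {'A': -1, 'B': 1})],
                  t_end=100.0)
print(state['A'] + state['B'])  # 100: molecule count is conserved
```

The per-step cost here is linear in the number of channels; the many-species/many-channels optimisations cited in the abstract exist precisely to avoid recomputing every propensity at every step.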
Abstract:
In this work, the growth and the magnetic properties of the transition metals molybdenum, niobium, and iron, and of the highly magnetostrictive C15 Laves phases of the RFe2 compounds (R: rare earth metal, here Tb, Dy, and Tb0.3Dy0.7), deposited on alpha-Al2O3 (sapphire) substrates are analyzed. In addition to (11-20) (a-plane) oriented sapphire substrates, mainly (10-10) (m-plane) oriented substrates were used. These show a pronounced facetting after high-temperature annealing in air. Atomic force microscopy (AFM) measurements reveal a dependence of the height, width, and angle of the facets on the annealing temperature. The observed deviations of the facet angles with respect to the theoretical values of the sapphire (10-1-2) and (10-11) surfaces are explained by cross-section high-resolution transmission electron microscopy (HR-TEM) measurements. These show the plain formation of the (10-11) surface, while the second, energy-reduced (10-1-2) facet has a curved shape given by atomic steps of (10-1-2) layers and is formed completely only at the facet ridges and valleys. Thin films of Mo and Nb, respectively, deposited by means of molecular beam epitaxy (MBE) reveal a non-twinned, (211)-oriented epitaxial growth on non-faceted as well as on faceted sapphire m-plane, as was shown by X-ray and TEM evaluations. In the case of faceted sapphire, the two bcc crystals overgrow the facets homogeneously. Here, the bcc (111) surface is nearly parallel to the sapphire (10-11) facet and the Mo/Nb (100) surface is nearly parallel to the sapphire (10-1-2) surface. (211)-oriented Nb templates on sapphire m-plane can be used for the non-twinned, (211)-oriented growth of RFe2 films by means of MBE. Again, the quality of the RFe2 films grown on faceted sapphire is almost equal to that of films on the non-faceted substrate. For comparison, thin RFe2 films of the established (110) and (111) orientations were prepared. 
Magnetic and magnetoelastic measurements performed in a self-designed setup reveal a high quality of the samples. No difference between samples with undulated and flat morphology can be observed. In addition to the preparation of covering, undulating thin films on faceted sapphire m-plane, nanoscopic structures of Nb and Fe were prepared by shallow-incidence MBE. The formation of the nanostructures can be explained by shadowing of the atomic beam by the facets, in combination with de-wetting effects of the metals on the heated sapphire surface. Accordingly, the nanostructures form at the facet ridges and overgrow them. The morphology of the structures can be varied via the deposition conditions, as was shown for Fe. The shape of the structures varies from pearl-necklet-like strings of spherical nanodots with diameters of a few tens of nanometers, to oval nanodots a few hundred nanometers in length, to continuous nanowires. Magnetization measurements reveal a uniaxial magnetic anisotropy with the easy axis of magnetization parallel to the facet ridges. The shape of the hysteresis depends on the morphology of the structures. The magnetization reversal processes of the spherical and oval nanodots were simulated by micromagnetic modelling and can be explained by the formation of magnetic vortices.
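As a rough illustration of easy-axis hysteresis (not the vortex-mediated reversal reported for the dots, which requires full micromagnetics), a minimal single-domain Stoner-Wohlfarth sketch with the field applied along the uniaxial easy axis can be written as follows; all parameter values are illustrative.

```python
import math

def energy(theta, h):
    """Reduced Stoner-Wohlfarth energy for a single-domain particle with
    the field along the uniaxial easy axis: sin^2(theta) anisotropy term
    plus Zeeman term -2*h*cos(theta), where h = H/H_K is the field in
    units of the anisotropy field H_K = 2K/(mu0*Ms)."""
    return math.sin(theta) ** 2 - 2.0 * h * math.cos(theta)

def relax(theta, h, rate=0.02, steps=5000):
    """Follow the local minimum of energy() by gradient descent,
    mimicking a quasi-static field sweep: the moment stays in its
    metastable state until the barrier vanishes at |h| = 1."""
    for _ in range(steps):
        grad = math.sin(2.0 * theta) + 2.0 * h * math.sin(theta)
        theta -= rate * grad
    return theta

def sweep(h_values, theta0):
    """Sweep the field and record the reduced magnetization m = cos(theta)."""
    theta, ms = theta0, []
    for h in h_values:
        theta = relax(theta + 1e-4, h)  # tiny tilt breaks the unstable equilibrium
        ms.append(math.cos(theta))
    return ms

down = [1.5 - 0.05 * i for i in range(61)]   # +1.5 -> -1.5 (in units of H_K)
up = [-1.5 + 0.05 * i for i in range(61)]    # -1.5 -> +1.5
ms_down = sweep(down, theta0=0.01)
ms_up = sweep(up, theta0=math.pi - 0.01)
```

With the field along the easy axis the loop is square: the remanent magnetization at h = 0 differs between the two sweep branches, and switching occurs near |h| = 1, i.e. at the anisotropy field.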
Resumo:
In the present study, the quaternary structures of Drosophila melanogaster hexamerin LSP-2 and Limulus polyphemus hemocyanin, both proteins of the hemocyanin superfamily, were elucidated at 10 Å resolution by cryo-EM 3D reconstruction. Furthermore, molecular modelling and rigid-body fitting allowed a detailed insight into the cryo-EM structures at the atomic level. The results are summarised as follows: Hexamerin 1. The cryo-EM structure of Drosophila melanogaster hexamerin LSP-2 is the first quaternary structure of a protein from the group of the insect storage proteins. 2. The hexamerin LSP-2 is a hexamer of six bean-shaped subunits that occupy the corners of a trigonal antiprism, yielding D3 (32) point-group symmetry. 3. Molecular modelling and rigid-body fitting of the hexamerin LSP-2 sequence showed a significant correlation between amino-acid inserts in the primary structure and additional masses in the cryo-EM structure that are not present in the published quaternary structures of chelicerate and crustacean hemocyanins. 4. The cryo-EM structure of Drosophila melanogaster hexamerin LSP-2 confirms that the arthropod hexameric structure is applicable to insect storage proteins. Hemocyanin 1. The cryo-EM structure of the 8×6mer Limulus polyphemus hemocyanin is the highest-resolved quaternary structure of an oligo-hexameric arthropod hemocyanin so far. 2. The hemocyanin is built of 48 bean-shaped subunits arranged in eight hexamers, yielding an 8×6mer with D2 (222) point-group symmetry. The 'basic building blocks' are four 2×6mers that form two 4×6mers in an anti-parallel manner; the latter aggregate 'face-to-face' into the 8×6mer. 3. The morphology of the 8×6mer was gauged and described very precisely on the basis of the cryo-EM structure. 4. 
Based on earlier topology studies of the eight different subunit types of Limulus polyphemus hemocyanin, eleven types of inter-hexamer interfaces have been identified that in the native 8×6mer sum up to 46 inter-hexamer bridges: 24 within the four 2×6mers, 10 to establish the two 4×6mers, and 12 to assemble the two 4×6mers into the 8×6mer. 5. Molecular modelling and rigid-body fitting of the Limulus polyphemus and orthologous Eurypelma californicum sequences allowed very few amino acids to be assigned to each of these interfaces. These amino acids now serve as candidates for the chemical bonds between the eight hexamers. 6. Most of the inter-hexamer contacts are conspicuously histidine-rich and evince constellations of amino acids that could constitute the basis for the allosteric interactions between the hexamers. 7. The cryo-EM structure of Limulus polyphemus hemocyanin opens the door to a fundamental understanding of the function of this highly cooperative protein.
Resumo:
The characteristics of aphasic speech in various languages have been the focus of numerous studies, but Arabic in general, and Palestinian Arabic in particular, remains largely unexplored in this respect. It is nevertheless of vital importance to have a clear picture of the specific aspects of Palestinian Arabic that may be affected in the speech of aphasics, in order to establish screening, diagnosis and therapy programs based on a clinical linguistic database. Hence, the central questions of this study are: what are the main neurolinguistic features of Palestinian aphasics’ speech at the phonetic-acoustic level, and to what extent are the results similar to those obtained for other languages? In general, this study is a survey of the most prominent features of Palestinian Broca’s aphasics’ speech. The main acoustic parameters of vowels and consonants are analysed, such as vowel duration, formant frequency, Voice Onset Time (VOT), intensity and frication duration. The deviant patterns among the Broca’s aphasics are presented and compared with those of normal speakers. The nature of the deficit, whether phonetic or phonological, is also discussed. Moreover, the coarticulatory characteristics and some prosodic patterns of Broca’s aphasics are addressed. Samples were collected from six Broca’s aphasics from the same local region. The acoustic analysis, conducted on a range of consonant and vowel parameters, revealed differences between the speech patterns of Broca’s aphasics and normal speakers. For example, impairments in the voicing contrast between voiced and voiceless stops were found in Broca’s aphasics. This feature was absent for the fricatives produced by the Palestinian Broca’s aphasics, and hence deviates from data obtained for aphasic speech in other languages. The Palestinian Broca’s aphasics displayed particular problems with the emphatic sounds. 
They exhibited deviant coarticulation patterns, another feature that is inconsistent with data obtained from studies of other languages. However, several other findings, such as impairments in VOT, are in accordance with those reported for various other languages. The results support the suggestion that speech production deficits in Broca’s aphasics are related not to phoneme selection but to articulatory implementation, and that some speech output impairments are related to timing and planning deficits.
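As a toy illustration of how a VOT-like interval can be extracted acoustically (a schematic sketch on a synthetic stop-vowel token, not the analysis procedure of the study), one can locate the release burst with a short-time energy threshold and the voicing onset by combining energy with a zero-crossing-rate criterion:

```python
import math
import random

FS = 16000  # sampling rate (Hz)

def make_token(vot_s=0.050, seed=1):
    """Synthetic voiceless-stop token: closure silence, a 5 ms noise
    burst, a silent voiceless gap, then periodic voicing. The gap length
    sets the true VOT (burst onset to voicing onset)."""
    rng = random.Random(seed)
    sig = [0.0] * int(0.050 * FS)                                    # closure
    sig += [rng.uniform(-0.3, 0.3) for _ in range(int(0.005 * FS))]  # burst
    sig += [0.0] * (int(vot_s * FS) - int(0.005 * FS))               # gap
    sig += [0.5 * math.sin(2 * math.pi * 140 * n / FS)
            for n in range(int(0.060 * FS))]                         # voicing
    return sig

def frame_energy(frame):
    return sum(x * x for x in frame) / len(frame)

def frame_zcr(frame):
    """Zero-crossing rate: noise is high-ZCR, low-frequency voicing low-ZCR."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)

def estimate_vot(sig, frame_len=80, hop=16, e_thr=1e-3, zcr_thr=0.1):
    """Burst onset: first frame whose energy exceeds e_thr. Voicing
    onset: first high-energy, low-ZCR frame after a short refractory
    window following the burst (burst-edge frames can otherwise mimic
    low ZCR). Returns the estimated VOT in seconds."""
    frames = [(i, sig[i:i + frame_len])
              for i in range(0, len(sig) - frame_len, hop)]
    burst = next(i for i, f in frames if frame_energy(f) > e_thr)
    search_from = burst + int(0.010 * FS)  # 10 ms refractory window
    voice = next(i for i, f in frames
                 if i >= search_from
                 and frame_energy(f) > e_thr and frame_zcr(f) < zcr_thr)
    return (voice - burst) / FS

vot = estimate_vot(make_token(vot_s=0.050))  # expect ~0.050 s
```

Real VOT measurement on clinical recordings is of course done by trained analysts on spectrograms; this sketch only makes the definition of the interval concrete.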
Resumo:
The main objective of this research is to improve our understanding of the processes controlling the formation of caves and karst-like morphologies in quartz-rich lithologies (more than 90% quartz), such as quartz sandstones and metamorphic quartzites. The processes currently considered by the scientific community to be responsible for these formations are summarised in the “Arenisation Theory”. This posits a slow but pervasive dissolution of the quartz grain/mineral boundaries that increases the overall porosity until the rock becomes incohesive and can be easily eroded by running water. The loose sands produced by this weathering are then evacuated to the surface through piping processes driven by water infiltrating from the fracture network or along the bedding planes. To address these problems we adopted a multidisciplinary approach, exploring and studying several cave systems in different tepuis. The first step was to build a theoretical model of the arenisation process, taking into account the most recent knowledge of quartz dissolution kinetics, intergranular/grain-boundary diffusion processes, and primary diffusion porosity, under the simplified conditions of an open fracture crossed by a continuous flow of undersaturated water. The results of the model were then compared with the world’s largest dataset of water geochemistry collected so far on the tepuis (more than 150 analyses), in both surface and cave settings. These studies confirm the importance and effectiveness of the arenisation process, which emerges as the main process responsible for the primary formation of these caves and of the karst-like surface morphologies. The numerical modelling and the field observations suggest an age for the cave systems of around 20-30 million years.
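The fracture-flow reasoning above can be caricatured in a few lines. The rate constant, solubility, and flow parameters below are generic order-of-magnitude values for quartz at low temperature, not the calibrated parameters of the study; the sketch only illustrates why water in a fracture can remain undersaturated and keep dissolving quartz along its whole flow path.

```python
import math

# Illustrative constants (order-of-magnitude literature values):
K_DISS = 1e-13        # quartz dissolution rate constant, mol m^-2 s^-1 (~25 C, near-neutral pH)
C_EQ = 1.0e-4         # quartz solubility, mol L^-1 (~6 mg/L SiO2)
V_MOLAR = 2.27e-5     # molar volume of quartz, m^3 mol^-1
SECONDS_PER_YEAR = 3.15e7

def retreat_rate(c):
    """Surface retreat rate (m/s) from the linear rate law
    r = k * (1 - C/C_eq): dissolution slows as the water approaches
    saturation with respect to quartz."""
    return K_DISS * (1.0 - c / C_EQ) * V_MOLAR

def years_to_dissolve(thickness_m, c):
    """Time (years) to dissolve a grain-boundary film of the given
    thickness at a fixed saturation state."""
    return thickness_m / retreat_rate(c) / SECONDS_PER_YEAR

def saturation_profile(x_m, aperture_m, velocity_m_s):
    """Steady-state saturation C/C_eq at distance x along a fracture
    (plug flow, two reactive walls): C/C_eq = 1 - exp(-x/lambda), with
    lambda = aperture * velocity * C_eq / (2 * k) from the mass balance
    aperture * v * dC/dx = 2 * k * (1 - C/C_eq)."""
    c_eq_m3 = C_EQ * 1000.0          # convert mol/L to mol/m^3
    lam = aperture_m * velocity_m_s * c_eq_m3 / (2.0 * K_DISS)
    return 1.0 - math.exp(-x_m / lam)

# With these illustrative numbers, water seeping through a millimetre-scale
# fracture stays far below quartz saturation over kilometre distances, so
# grain-boundary dissolution can act pervasively along the entire flow path.
```

With such sluggish kinetics, dissolving even a micrometre-scale grain-boundary film takes on the order of ten thousand years far from equilibrium, consistent with the very long (multi-million-year) timescales inferred for the cave systems.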