77 results for complex data
Abstract:
Time-resolved studies of chlorosilylene, ClSiH, generated by the 193 nm laser flash photolysis of 1-chloro-1-silacyclopent-3-ene, have been carried out to obtain rate constants for its bimolecular reaction with trimethylsilane-1-d, Me3SiD, in the gas phase. The reaction was studied at total pressures up to 100 Torr (with and without added SF6) over the temperature range of 295−407 K. The rate constants were found to be pressure independent and gave the following Arrhenius equation: log[k/(cm3 molecule−1 s−1)] = (−13.22 ± 0.15) + [(13.20 ± 1.00) kJ mol−1]/(RT ln 10). When compared with previously published kinetic data for the reaction of ClSiH with Me3SiH, kinetic isotope effects, kD/kH, in the range from 7.4 (297 K) to 6.4 (407 K) were obtained. These far exceed the values of 0.4−0.5 estimated for a single-step insertion process. Quantum chemical calculations (G3MP2B3 level) confirm not only the involvement of an intermediate complex, but also the existence of a low-energy internal isomerization pathway which can scramble the D and H atom labels. By means of Rice−Ramsperger−Kassel−Marcus modeling and a necessary (but small) refinement of the energy surface, we have shown that this mechanism can closely reproduce the experimental isotope effects. These findings provide the first experimental evidence for the isomerization pathway and thereby offer the most concrete evidence to date for the existence of intermediate complexes in the insertion reactions of silylenes.
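As a quick numerical illustration (a minimal sketch; the constants are read directly off the Arrhenius fit quoted above), the rate constant can be evaluated at the end points of the experimental temperature range:

```python
import math

R = 8.314462618e-3  # gas constant, kJ mol^-1 K^-1

def k_arrhenius(T):
    """Rate constant (cm^3 molecule^-1 s^-1) from the fit
    log k = -13.22 + (13.20 kJ mol^-1)/(R*T*ln 10)."""
    log_k = -13.22 + 13.20 / (R * T * math.log(10))
    return 10.0 ** log_k

for T in (295.0, 407.0):
    print(f"T = {T:.0f} K: k = {k_arrhenius(T):.2e} cm^3 molecule^-1 s^-1")
```

Note that k decreases with temperature, the signature of the negative activation energy expected when an intermediate complex precedes insertion.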
Abstract:
Climate modelling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of ever more complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce this data. There is wide variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team's development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change's (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
Abstract:
We examined complex geographical patterns in the morphology of a kleptoparasitic spider, Argyrodes kumadai, across its distributional range in Japan. To disentangle biotic and abiotic factors underlying morphological variation, latitudinal trends were investigated in two traits, body size and relative leg length, across separate transition zones for host use and voltinism. Statistical analyses revealed complex sawtooth clines. Adult body size changed dramatically at the transition zones for host use and voltinism, and exhibited a latitudinal decline following the converse to Bergmann's cline under the same host use and voltinism in both sexes. A similar pattern was observed for relative leg length in females but not in males. A genetic basis for part of the observed differences in morphology was supported by a common-garden experiment. Our data suggest that local adaptation to factors other than season length, such as resource availability (here associated with host use), obscures underlying responses to latitude.
Abstract:
OBJECTIVES: The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain among the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate vast amounts of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. METHODS: To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. RESULTS: To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and metadata of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. CONCLUSIONS: Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.
Abstract:
A new tetranuclear complex, [Cu4L4](ClO4)4·2H2O (1), has been synthesized from the self-assembly of copper(II) perchlorate and the tridentate Schiff base ligand (2E,3E)-3-(2-aminopropylimino)butan-2-one oxime (HL). Single-crystal X-ray diffraction studies reveal that complex 1 consists of a Cu4(NO)4 core in which the four copper(II) centers, each in a square pyramidal environment, are arranged in a distorted tetrahedral geometry. They are linked together by a rare bridging mode (μ3-η1,η2,η1) of the oximato ligands. Analysis of magnetic susceptibility data indicates moderate antiferromagnetic exchange interactions (J1 = −48 cm−1, J2 = −40 cm−1 and J3 = −52 cm−1) through σ-superexchange pathways (in-plane bridging) of the oxime group. Theoretical calculations based on DFT have been used to obtain the energy states of the different spin configurations, estimate the coupling constants, and elucidate the exact magnetic exchange pathways.
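The reported coupling constants presuppose a spin-Hamiltonian sign convention, which the abstract does not state; assuming the commonly used isotropic Heisenberg form (an assumption here, not taken from the source), negative J corresponds to antiferromagnetic coupling:

$$\hat{H} = -2\sum_{i<j} J_{ij}\,\hat{S}_i \cdot \hat{S}_j$$

Under this convention, J < 0 favors antiparallel alignment of neighboring spins, consistent with the antiferromagnetic interactions reported above.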
Abstract:
A tetranuclear Cu(II) complex, [Cu4L4(H2O)4](ClO4)4, has been synthesized using the terdentate Schiff base 2-(pyridine-2-yliminomethyl)-phenol (HL) (the condensation product of salicylaldehyde and 2-aminopyridine) and copper perchlorate. The complex has been characterized chemically by IR and UV/Vis spectroscopy. A single-crystal diffraction study shows that the complex contains a nearly planar tetranuclear core of four copper atoms, which occupy four equivalent five-coordinate sites with a square pyramidal environment. Magnetic measurements have been carried out over the temperature range 2–300 K in a field of 100 Oe. Analysis of magnetic susceptibility data indicates a strong antiferromagnetic (J1 = −638 cm−1) exchange interaction between the diphenoxo-bridged Cu(II) centers and a moderate antiferromagnetic (J2 = −34 cm−1) interaction between the N–C–N bridged Cu(II) centers. The magnetic exchange interactions (J's) are also discussed on the basis of a computational study using DFT methodology. The spin density distribution (singlet ground state) is calculated to visualize the effect of delocalization of spin density through the bridging groups.
Abstract:
This paper summarizes and analyses available data on the surface energy balance of Arctic tundra and boreal forest. The complex interactions between ecosystems and their surface energy balance are also examined, including climatically induced shifts in ecosystem type that might amplify or reduce the effects of potential climatic change. High latitudes are characterized by large annual changes in solar input. Albedo decreases strongly from winter, when the surface is snow-covered, to summer, especially in nonforested regions such as Arctic tundra and boreal wetlands. Evapotranspiration (QE) of high-latitude ecosystems is less than that from a freely evaporating surface and decreases late in the season, when soil moisture declines, indicating stomatal control over QE, particularly in evergreen forests. Evergreen conifer forests have a canopy conductance half that of deciduous forests and consequently lower QE and higher sensible heat flux (QH). There is broad overlap in energy partitioning between Arctic and boreal ecosystems, although Arctic ecosystems and light taiga generally have higher ground heat flux because there is less leaf and stem area to shade the ground surface, and the thermal gradient from the surface to permafrost is steeper. Permafrost creates a strong heat sink in summer that reduces surface temperature and therefore heat flux to the atmosphere. Loss of permafrost would therefore amplify climatic warming. If warming caused an increase in productivity and leaf area, or fire caused a shift from evergreen to deciduous forest, this would increase QE and reduce QH. Potential future shifts in vegetation would have varying climate feedbacks, with the largest effects caused by shifts from boreal conifer to shrubland or deciduous forest (or vice versa) and from Arctic coastal to wet tundra. An increase in logging activity in the boreal forests appears to reduce QE by roughly 50% with little change in QH, while the ground heat flux is strongly enhanced.
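For reference, the flux terms above partition net radiation Rn into latent (QE), sensible (QH) and ground (QG) heat fluxes, Rn = QE + QH + QG. A minimal sketch with made-up illustrative values (not data from the studies surveyed) shows how halving QE at fixed Rn and QG shifts the balance toward QH:

```python
def sensible_heat_flux(Rn, QE, QG):
    """Close the surface energy balance Rn = QE + QH + QG (W m^-2)
    and return the sensible heat flux QH and the Bowen ratio QH/QE."""
    QH = Rn - QE - QG
    return QH, QH / QE

# Hypothetical midsummer fluxes (W m^-2): an evergreen conifer stand
# with roughly half the canopy conductance of a deciduous stand has
# lower QE and therefore higher QH at similar net radiation.
for name, QE in [("deciduous", 200.0), ("evergreen conifer", 120.0)]:
    QH, bowen = sensible_heat_flux(Rn=400.0, QE=QE, QG=40.0)
    print(f"{name}: QH = {QH:.0f} W m^-2, Bowen ratio = {bowen:.2f}")
```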
Abstract:
We describe ncWMS, an implementation of the Open Geospatial Consortium’s Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a “bridging” tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing timeseries and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.
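To illustrate the kind of request such a server answers, the sketch below builds a standard WMS 1.3.0 GetMap URL; TIME and ELEVATION are the standard WMS dimensions used to address multidimensional gridded data. The server URL and layer name here are hypothetical:

```python
from urllib.parse import urlencode

# Standard WMS 1.3.0 GetMap parameters.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "ocean/sea_water_temperature",  # hypothetical layer name
    "STYLES": "",
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",  # lat/lon axis order under WMS 1.3.0
    "WIDTH": 1024,
    "HEIGHT": 512,
    "FORMAT": "image/png",
    "TIME": "2010-07-01T00:00:00Z",  # time dimension
    "ELEVATION": "0",                # vertical dimension
}
print("https://example.org/ncWMS/wms?" + urlencode(params))
```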
Abstract:
Objective: To describe the training undertaken by pharmacists employed in a pharmacist-led, information technology-based intervention study to reduce medication errors in primary care (PINCER Trial), to evaluate the pharmacists' assessment of the training, and to determine the time implications of undertaking the training. Methods: Six pharmacists received training, which included training on root cause analysis and educational outreach, to enable them to deliver the PINCER Trial intervention. This was evaluated using self-report questionnaires at the end of each training session. The time taken to complete each session was recorded. Data from the evaluation forms were entered onto a Microsoft Excel spreadsheet, independently checked, and the summary of results further verified. Frequencies were calculated for responses to the three-point Likert scale questions. Free-text comments from the evaluation forms and pharmacists' diaries were analysed thematically. Key findings: All six pharmacists received 22 hours of training over five sessions. In four out of the five sessions, the pharmacists who completed an evaluation form (27 out of 30 were completed) stated they were satisfied or very satisfied with the various elements of the training package. Analysis of the free-text comments and the pharmacists' diaries showed that the principles of root cause analysis and educational outreach were viewed as useful tools to help pharmacists conduct pharmaceutical interventions in both the study and the other pharmacy roles that they undertook. The opportunity to undertake role play was a valuable part of the training received. Conclusions: The findings presented in this paper suggest that providing the PINCER pharmacists with training in root cause analysis and educational outreach contributed to the successful delivery of PINCER interventions and could potentially be utilised by other pharmacists based in general practice to deliver pharmaceutical interventions to improve patient safety.
Abstract:
This chapter introduces the latest practices and technologies in the interactive interpretation of environmental data. With environmental data becoming ever larger, more diverse and more complex, there is a need for a new generation of tools that provides new capabilities over and above those of the standard workhorses of science. These new tools aid the scientist in discovering interesting new features (and also problems) in large datasets by allowing the data to be explored interactively using simple, intuitive graphical tools. In this way, new discoveries are made that are commonly missed by automated batch data processing. This chapter discusses the characteristics of environmental science data, common current practice in data analysis and the supporting tools and infrastructure. New approaches are introduced and illustrated from the points of view of both the end user and the underlying technology. We conclude by speculating as to future developments in the field and what must be achieved to fulfil this vision.
Abstract:
In the past decade, the analysis of data has faced the challenge of dealing with very large and complex datasets and the real-time generation of data. Technologies to store and access these complex and large datasets are in place. However, robust and scalable analysis technologies are needed to extract meaningful information from them. The research field of Information Visualization and Visual Data Analytics addresses this need. Information visualization and data mining are often used to complement each other. Their common goal is the extraction of meaningful information from complex and possibly large data. However, whereas data mining focuses on the use of silicon hardware, visualization techniques also aim to harness the powerful image-processing capabilities of the human brain. This article highlights research on data visualization and visual analytics techniques. Furthermore, we highlight existing visual analytics techniques, systems, and applications, including a perspective on the field from the chemical process industry.
Abstract:
Communication signal processing applications often involve complex-valued (CV) functional representations for signals and systems. CV artificial neural networks have been studied theoretically and applied widely in nonlinear signal and data processing [1–11]. Note that most artificial neural networks cannot be automatically extended from the real-valued (RV) domain to the CV domain, because the resulting model would in general violate the Cauchy-Riemann conditions, which means that the training algorithms become unusable. A number of analytic functions were introduced for fully CV multilayer perceptrons (MLPs) [4]. A fully CV radial basis function (RBF) network was introduced in [8] for regression and classification applications. Alternatively, the problem can be avoided by using two RV artificial neural networks, one processing the real part and the other processing the imaginary part of the CV signal/system. An even more challenging problem is the inverse of a CV
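A minimal sketch of the split alternative mentioned above (with arbitrary untrained weights; the network sizes are illustrative): two independent real-valued networks process the real and imaginary parts, so each uses an ordinary real activation and never has to satisfy the Cauchy-Riemann conditions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rv_mlp(x, W1, b1, W2, b2):
    """A small real-valued MLP: one tanh hidden layer, linear output."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def init_rv_mlp(n_in, n_hidden, n_out):
    return (rng.normal(size=(n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(size=(n_hidden, n_out)), np.zeros(n_out))

# One RV network maps [Re(z), Im(z)] -> Re(y), the other -> Im(y).
net_re = init_rv_mlp(2, 8, 1)
net_im = init_rv_mlp(2, 8, 1)

def cv_model(z):
    """Complex-valued mapping assembled from two real-valued networks."""
    x = np.stack([z.real, z.imag], axis=-1)
    return rv_mlp(x, *net_re).squeeze(-1) + 1j * rv_mlp(x, *net_im).squeeze(-1)

print(cv_model(np.array([1 + 2j, -0.5 + 0.1j])))
```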
Abstract:
This work presents a model study of the formation of a dimeric dioxomolybdenum(VI) complex, [MoO2L]2, generated by the simultaneous satisfaction of the acceptor and donor character existing in the corresponding monomeric Mo(VI) complex MoO2L. This mononuclear complex is specially designed to contain a coordinatively unsaturated Mo(VI) acceptor centre and a free donor group (e.g. an –NH2 group) strategically placed in the ligand skeleton [H2L = 2-hydroxyacetophenonehydrazone of 2-aminobenzoylhydrazine]. Apart from the dimer [MoO2L]2, complexes of the type MoO2L·B (where B = CH3OH, γ-picoline and imidazole) are also reported. All the complexes are characterized by elemental analysis, spectroscopic (UV–Vis, IR, 1H NMR) techniques and cyclic voltammetry. Single-crystal X-ray structures of [MoO2L]2 (1), MoO2L·CH3OH (2), and MoO2L·(γ-pic) (3) have been determined and are discussed. DFT calculations on these complexes corroborate the experimental data and provide clues to the facile formation of this type of dimer, not reported previously. The process of dimer formation may also be viewed as an interaction between two molecules of a specially designed complex acting as a monodentate ligand. This work is expected to open up a new field of design and synthesis of dimeric complexes through the process of symbiotic donor–acceptor (acid–base) interaction between two molecules of a specially designed monomer.
Abstract:
Smart healthcare is a complex domain for systems integration, owing to the human and technical factors and heterogeneous data sources involved. As part of a smart city, it is an area in which clinical functions require smart multi-system collaboration for effective communication among departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many challenges regarding integration and interoperability, such as information collision, heterogeneous data sources, policy obstacles, and procedure mismanagement. The purpose of this study is to analyse the data, semantic, and pragmatic interoperability of systems integration in a radiology department, and to develop a pragmatic interoperability framework for guiding the integration. We selected an ongoing project at a local hospital as our case study. The project aims to achieve data sharing and interoperability among the Radiology Information System (RIS), Electronic Patient Record (EPR), and Picture Archiving and Communication System (PACS). Qualitative data collection and analysis methods were used. The data sources consisted of documentation (including publications and internal working papers), one year of non-participant observation, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists and secretaries. We identified four primary phases of the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also develop a pragmatic interoperability framework that summarizes the empirical findings and proposes recommendations for guiding integration in the radiology context.
Abstract:
Single-carrier (SC) block transmission with frequency-domain equalisation (FDE) offers a viable transmission technology for combating the adverse effects of the long dispersive channels encountered in high-rate broadband wireless communication systems. However, for systems requiring both high bandwidth efficiency and high power efficiency, the channel can generally be modelled as a Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. For such nonlinear Hammerstein channels, the standard SC-FDE scheme no longer works. This paper advocates a complex-valued (CV) B-spline neural network based nonlinear SC-FDE scheme for Hammerstein channels. Specifically, we model the nonlinear HPA, which represents the CV static nonlinearity of the Hammerstein channel, by a CV B-spline neural network, and we develop two efficient alternating least squares schemes for estimating the parameters of the Hammerstein channel, including both the channel impulse response coefficients and the parameters of the CV B-spline model. We also use another CV B-spline neural network to model the inversion of the nonlinear HPA, and the parameters of this inverting B-spline model can easily be estimated using the standard least squares algorithm based on the pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. Equalisation of the SC Hammerstein channel can then be accomplished by the usual one-tap linear equalisation in the frequency domain followed by the inverse B-spline neural network model in the time domain. Extensive simulation results are included to demonstrate the effectiveness of our nonlinear SC-FDE scheme for Hammerstein channels.
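For the linear part of this scheme, a minimal sketch of standard one-tap zero-forcing FDE for a cyclic-prefixed single-carrier block is shown below (the channel taps and QPSK block are made up for illustration; the B-spline inversion of the HPA nonlinearity described above would be applied around this step and is omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)

N = 64                          # block length
h = np.array([1.0, 0.5, 0.2j])  # illustrative channel impulse response
x = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=N)  # QPSK block

# The cyclic prefix turns linear convolution into circular convolution,
# so the channel is diagonalised by the DFT: Y[k] = H[k] * X[k].
cp = len(h) - 1
tx = np.concatenate([x[-cp:], x])    # prepend cyclic prefix
rx = np.convolve(tx, h)[cp:cp + N]   # channel; strip CP and tail

# One-tap zero-forcing equalisation in the frequency domain.
H = np.fft.fft(h, N)
x_hat = np.fft.ifft(np.fft.fft(rx) / H)

print("max symbol error:", np.max(np.abs(x_hat - x)))  # ~1e-15
```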