935 results for Complex data


Relevance:

30.00%

Publisher:

Abstract:

While general equilibrium theories of trade stress the role of third-country effects, little work has been done in the empirical foreign direct investment (FDI) literature to test such spatial linkages. This paper aims to provide further insight into the long-run determinants of Spanish FDI by considering not only bilateral but also spatially weighted third-country determinants. The few studies carried out so far have focused on FDI flows in a limited number of countries, yet Spanish FDI outflows have risen dramatically since 1995 and today account for a substantial part of global FDI. Therefore, we estimate recently developed spatial panel data models by maximum likelihood (ML) for Spanish outflows (1993-2004) to the top 50 host countries. After controlling for unobservable effects, we find that spatial interdependence matters and provide evidence consistent with New Economic Geography (NEG) theories of agglomeration, driven mainly by complex (vertical) FDI motivations. Estimates from spatial error models also shed light on the transmission mechanism of shocks.
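A minimal sketch may help readers picture the spatial construction such models rely on: the code below builds a row-normalised inverse-distance weight matrix and the spatially weighted third-country FDI term (the spatial lag). The weighting scheme, distances and FDI values are illustrative assumptions, not the paper's actual specification.

    # Illustrative sketch (not the paper's specification): row-normalised
    # inverse-distance weights W and the spatial lag W*FDI used as a regressor.
    import numpy as np

    def inverse_distance_weights(dist: np.ndarray) -> np.ndarray:
        """Row-normalised inverse-distance weight matrix with a zero diagonal."""
        w = np.zeros_like(dist)
        mask = dist > 0
        w[mask] = 1.0 / dist[mask]                 # 1/d_ij; self-distances stay zero
        return w / w.sum(axis=1, keepdims=True)    # each row sums to one

    # Toy data: pairwise distances (km) and FDI inflows for three host countries.
    distances = np.array([[0.0, 500.0, 1200.0],
                          [500.0, 0.0, 800.0],
                          [1200.0, 800.0, 0.0]])
    fdi = np.array([10.0, 4.0, 7.0])               # hypothetical FDI values

    W = inverse_distance_weights(distances)
    third_country_fdi = W @ fdi                    # weighted average of neighbours' FDI
    print(third_country_fdi)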

Relevance:

30.00%

Publisher:

Abstract:

The heated debate over whether there is only a single mechanism or two mechanisms for morphology has diverted valuable research energy away from the more critical questions about the neural computations involved in the comprehension and production of morphologically complex forms. Cognitive neuroscience data implicate many brain areas. All extant models, whether they rely on a connectionist network or espouse two mechanisms, are too underspecified to explain why more than a few brain areas differ in their activity during the processing of regular and irregular forms. No one doubts that the brain treats regular and irregular words differently, but brain data indicate that a simplistic account will not do. It is time for us to search for the critical factors free from theoretical blinders.

Relevance:

30.00%

Publisher:

Abstract:

FeBr2 was reacted with one equivalent of mnt2- (mnt = cis-1,2-dicyanoethylene-1,2-dithiolate) and the α-diimine L (L = 1,10-phenanthroline or 2,2'-bipyridine) in THF solution, followed by addition of t-butyl isocyanide, to give the neutral compound [Fe(mnt)(L)(t-BuNC)2]. The products were characterized by infrared, UV-visible and Mössbauer spectroscopy, as well as by thermogravimetric and conductivity measurements. The equilibrium geometry was calculated by density functional theory (DFT) and the electronic spectrum by its time-dependent extension (TD-DFT). The experimental and theoretical results are in good agreement and define an octahedral geometry with the two isocyanide ligands as neighbours. The π→π* intraligand electronic transition was not observed for the cis-isomers in the near-IR spectral region.

Relevance:

30.00%

Publisher:

Abstract:

This work studies the effect of NTMP (nitrilotris(methylenephosphonic acid)) on the adsorption of Cu(II), Zn(II), and Cd(II) onto boehmite in the pH range 5-9.5. The data were analyzed using the 2-pK constant capacitance model (CCM), assuming ternary surface complex formation. Under stoichiometric conditions, NTMP is more effective at removing Cu(II) than Zn(II) from solution, and the contribution of ternary surface complexes is important for modeling the adsorption of both metals. Under nonstoichiometric conditions and high surface loading, with a Me(II)/NTMP ratio of 1:5, Cu(II) and Zn(II) adsorption is significantly suppressed. In the case of Cd(II), the free metal is the dominant adsorbed species.
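For readers unfamiliar with the 2-pK CCM, the sketch below solves only its acid-base core: surface protonation/deprotonation of >SOH sites combined with the constant-capacitance charge-potential closure. The NTMP and ternary metal surface complexes of the study would add further mass-action equations. All constants and surface parameters below are hypothetical placeholders.

    # Acid-base core of a 2-pK constant capacitance model; hypothetical parameters.
    import numpy as np
    from scipy.optimize import brentq

    F, R, T = 96485.0, 8.314, 298.15        # C/mol, J/(mol K), K
    logK1, logK2 = 7.0, -9.0                # >SOH + H+ = >SOH2+ ; >SOH = >SO- + H+
    C_cap = 1.0                              # capacitance, F/m^2
    sites = 1e-3                             # total surface sites, mol/L
    SA = 100.0                               # surface area * solid conc., m^2/L

    def surface_charge(psi, h):
        """Surface charge density (C/m^2) from Boltzmann-corrected mass-action laws."""
        fp = 10**logK1 * h * np.exp(-F * psi / (R * T))   # [>SOH2+]/[>SOH]
        fm = 10**logK2 / h * np.exp(+F * psi / (R * T))   # [>SO-]/[>SOH]
        return F * sites * (fp - fm) / ((1 + fp + fm) * SA)

    for pH in np.arange(5.0, 9.6, 0.5):
        h = 10**(-pH)
        # Constant-capacitance closure: sigma(psi) = C_cap * psi, solved for psi.
        psi = brentq(lambda p: surface_charge(p, h) - C_cap * p, -2.0, 2.0)
        print(f"pH {pH:4.1f}  psi = {psi*1000:7.2f} mV  sigma = {C_cap*psi:8.4f} C/m^2")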

Relevance:

30.00%

Publisher:

Abstract:

Garlic viruses often occur in complex infections in nature. In this study, a garlic virus complex collected in fields in Brazil was purified. RT-PCR was performed using specific primers designed from the consensus regions of the coat protein genes of the garlic strain of Onion yellow dwarf virus (OYDV-G) and of Leek yellow stripe virus (LYSV). cDNA of Garlic common latent virus (GCLV) was synthesized using oligo-dT and random primers. By these procedures, individual garlic virus genomes were isolated and sequenced. Nucleotide sequence analysis combined with serological data revealed two potyviruses, OYDV-G and LYSV, and the carlavirus GCLV simultaneously infecting garlic plants. Deduced amino acid sequences of the Brazilian isolates were compared with those of related viruses reported in different geographical regions of the world. The analysis showed close relationships for the Brazilian isolates of OYDV-G and GCLV, and large divergence for the LYSV isolate. The detection of these virus species was confirmed by the specific reactions observed when coat protein genes of the Brazilian isolates were used as probes in dot-blot and Southern blot hybridization assays. Natural viral re-infection of virus-free garlic was also evaluated in the field.

Relevance:

30.00%

Publisher:

Abstract:

After decades of mergers and acquisitions and successive technology trends such as CRM, ERP and DW, the data in enterprise systems is scattered and inconsistent. Global organizations face the challenge of addressing local uses of shared business entities, such as customer and material, while at the same time maintaining a consistent, unique, and consolidated view of financial indicators. In addition, current enterprise systems do not accommodate the pace of organizational change, and immense effort is required to maintain data. When it comes to systems integration, ERPs are considered “closed” and expensive. Data structures are complex, and the “out-of-the-box” integration options offered are not based on industry standards. Therefore, expensive and time-consuming projects are undertaken in order to have the required data flowing according to business process needs. Master Data Management (MDM) emerges as a discipline focused on ensuring long-term data consistency. Presented as a technology-enabled business discipline, it emphasizes business process and governance to model and maintain the data related to key business entities. There are immense technical and organizational challenges in accomplishing the “single version of the truth” MDM mantra. Adding one central repository of master data may prove unfeasible in some scenarios, so an incremental approach is recommended, starting from the areas most critically affected by data issues. This research aims at understanding the current literature on MDM and contrasting it with views from professionals. The data collected from interviews revealed details on the complexities of data structures and data management practices in global organizations, reinforcing the call for more in-depth research on organizational aspects of MDM. The most difficult piece of master data to manage is the “local” part: the attributes related to the sourcing and storing of materials in one particular warehouse in the Netherlands, or a complex set of pricing rules for a subsidiary of a customer in Brazil. From a practical perspective, this research evaluates one MDM solution under development at a Finnish IT solution provider. By applying an existing assessment method, the research attempts to provide the company with a possible tool to evaluate its product from a vendor-agnostic perspective.
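As an illustration of the kind of consolidation step MDM tooling performs, the hedged sketch below merges two conflicting local customer records into a single “golden record” using simple survivorship rules; the field names, sources and rules are hypothetical and are not taken from the evaluated product.

    # Hypothetical golden-record merge: duplicate customer records from local
    # systems are consolidated with simple survivorship rules (illustrative only).
    from datetime import date

    records = [
        {"source": "ERP-NL", "name": "ACME B.V.", "vat_id": None, "updated": date(2013, 5, 1)},
        {"source": "CRM-BR", "name": "ACME Brasil Ltda", "vat_id": "BR-123", "updated": date(2013, 9, 12)},
    ]

    def golden_record(duplicates):
        """Merge duplicates: prefer the most recently updated non-empty value per field."""
        merged = {}
        for field in ("name", "vat_id"):
            candidates = [r for r in duplicates if r.get(field)]
            merged[field] = max(candidates, key=lambda r: r["updated"])[field] if candidates else None
        merged["sources"] = [r["source"] for r in duplicates]   # keep lineage for governance
        return merged

    print(golden_record(records))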

Relevance:

30.00%

Publisher:

Abstract:

Data is the most important asset of a company in the information age. Other assets, such as technology, facilities or products, can be copied or reverse-engineered, and employees can be brought over, but data remains unique to every company. As data management topics slowly move from unknown unknowns to known unknowns, tools to evaluate and manage data properly are being developed and refined. Many projects are underway today to develop various maturity models for evaluating information and data management practices. These maturity models come in many shapes and sizes: from short and concise ones meant for a quick assessment to complex ones that call for an expert assessment by experienced consultants. In this paper, several of them, made not only by external inter-organizational groups and authors but also developed internally at a Major Energy Provider Company (MEPC), are juxtaposed and thoroughly analyzed. Apart from analyzing the available maturity models related to Data Management, this paper also selects the one with the most merit and describes and analyzes its use in performing a maturity assessment at MEPC. The utility of maturity models is twofold: descriptive and prescriptive. Besides recording the current state of Data Management practice maturity through the assessments, the selected maturity model is also used to chart the way forward. Thus, after the current situation is presented, analysis and recommendations on how to improve it, based on the definitions of the higher levels of maturity, are given. Generally, the main trend observed was the widening of the Data Management field to include more business and “soft” areas (as opposed to technical ones) and a shift of focus towards the business value of data, while assuming that the underlying IT systems for managing data are “ideal”, that is, left to the purely technical disciplines to design and maintain. This trend is not only present in Data Management but in other technological areas as well, where more and more attention is given to innovative use of technology, while acknowledging that the strategic importance of IT as such is diminishing.
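A toy sketch of the descriptive use of such a maturity model is given below: per-dimension questionnaire scores are averaged and mapped to a maturity level. The dimensions, scores and level labels are invented for illustration and do not reproduce the MEPC assessment.

    # Illustrative maturity scoring: average 1-5 answers per dimension and map
    # the result to a level label (all names and numbers are hypothetical).
    LEVELS = ["Initial", "Repeatable", "Defined", "Managed", "Optimized"]

    answers = {                     # scores collected per assessment question
        "Data Governance": [2, 3, 2],
        "Data Quality":    [3, 3, 4],
        "Master Data":     [1, 2, 2],
    }

    def maturity(scores):
        """Return (average score, level label) for one dimension."""
        avg = sum(scores) / len(scores)
        level = LEVELS[min(int(round(avg)) - 1, len(LEVELS) - 1)]
        return avg, level

    for dimension, scores in answers.items():
        avg, level = maturity(scores)
        print(f"{dimension:15s} {avg:.1f} -> {level}")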

Relevance:

30.00%

Publisher:

Abstract:

This study combines several projects related to flows in vessels with complex shapes representing different chemical apparatuses. Three major cases were studied. The first is a two-phase plate reactor with a complex structure of intersecting microchannels engraved on one plate, which is covered by another, plain plate. The second case is a tubular microreactor, consisting of two subcases. The first subcase is a multi-channel two-component commercial micromixer (slit interdigital) used to mix two liquid reagents before they enter the reactor. The second subcase is a micro-tube, in which the distribution of the heat generated by the reaction was studied. The third case is a conventionally packed column. Here, however, flow, reactions and mass transfer were not modeled; instead, the research focused on how to describe mathematically the realistic geometry of the column packing, which is rather random and cannot be created using conventional computer-aided design or engineering (CAD/CAE) methods. Several modeling approaches were used to describe the performance of the processes in the considered vessels. Computational fluid dynamics (CFD) was used to describe the details of the flow in the plate microreactor and the micromixer. A space-averaged mass transfer model based on Fick’s law was used to describe the exchange of species through the gas-liquid interface in the microreactor. This model utilized data, namely the values of the interfacial area, obtained from the corresponding CFD model. A common heat transfer model was used to find the heat distribution in the micro-tube. To generate the column packing, an additional multibody dynamics model was implemented. An auxiliary simulation was carried out to determine the position and orientation of every packing element in the column. These data were then exported into a CAD system to generate the desired geometry, which could further be used for CFD simulations. The results demonstrated that the CFD model of the microreactor predicted the flow pattern well and agreed with experiments. The mass transfer model made it possible to estimate the mass transfer coefficient. Modeling of the second case showed that the flow in the micromixer and the heat transfer in the tube could be excluded from the larger model describing the chemical kinetics in the reactor. Results of the third case demonstrated that the auxiliary simulation could successfully generate complex random packings, not only for the column but also for other similar cases.
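The space-averaged mass transfer model can be pictured with a minimal sketch of the well-mixed liquid-phase balance dC/dt = kL·a·(C* − C), where the specific interfacial area a is the quantity supplied by the CFD model; the numerical values below are hypothetical.

    # Sketch of a space-averaged gas-liquid mass transfer balance, with the
    # interfacial area a assumed to come from a CFD model (numbers are made up).
    kL = 1e-4          # liquid-side mass transfer coefficient, m/s
    a = 800.0          # specific interfacial area from CFD, m^2 per m^3 of liquid
    C_sat = 1.2        # equilibrium (saturation) concentration, mol/m^3
    C = 0.0            # initial dissolved concentration, mol/m^3
    dt, t_end = 0.01, 5.0

    for _ in range(int(t_end / dt)):
        C += dt * kL * a * (C_sat - C)      # explicit Euler step of dC/dt = kL*a*(C_sat - C)

    print(f"Dissolved concentration after {t_end} s: {C:.3f} mol/m^3 (C_sat = {C_sat})")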

Relevance:

30.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

With about 450 species, Maxillaria is one of the largest genera of Orchidaceae in the Neotropics and presents several taxonomic uncertainties regarding its generic circumscription and the delimitation of species groups, mainly due to the large variability of some species. The present study aims at verifying the morphological variation and species delimitation in the Brasiliorchis picta complex, Brasiliorchis being a recently established genus segregated from Maxillaria, using multivariate morphometric analysis. A total of 340 specimens belonging to six species (B. chrysantha (Barb. Rodr.) R.B. Singer, S. Koehler & Carnevali, B. gracilis (Lodd.) R.B. Singer, S. Koehler & Carnevali, B. marginata (Lindl.) R.B. Singer, S. Koehler & Carnevali, B. picta (Hook.) R.B. Singer, S. Koehler & Carnevali, B. porphyrostele (Rchb. f.) R.B. Singer, S. Koehler & Carnevali and B. ubatubana (Hoehne) R.B. Singer, S. Koehler & Carnevali) were analyzed using multivariate methods (PCA, CVA, DA, and cluster analysis with UPGMA). B. gracilis shows the largest morphological discontinuity, mainly due to its smaller size. The other species tend to form distinct groups, but intermediate characteristics between pairs of species cause overlaps among individuals of different species and thus blur the distinction between them. Hybridization and geographic distribution may be involved in the differentiation of the species and lineages in this complex. Because the species classified a priori in this work cannot be recognized by the quantitative characters measured here, other tools, such as geometric morphometrics and molecular data, should be employed in future work to clarify species relationships in this complex.
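As a hedged illustration of the kind of multivariate ordination used here, the sketch below runs a PCA on standardised morphometric measurements; the variables and data are randomly generated stand-ins, not the 340 measured specimens.

    # Toy PCA on standardised morphometric data (made-up variables and specimens).
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # 30 hypothetical specimens x 4 hypothetical floral/vegetative measurements (mm).
    X = rng.normal(loc=[25, 8, 40, 12], scale=[4, 1, 6, 2], size=(30, 4))

    X_std = StandardScaler().fit_transform(X)   # standardise: PCA on the correlation matrix
    pca = PCA(n_components=2)
    scores = pca.fit_transform(X_std)

    print("Explained variance ratio:", pca.explained_variance_ratio_)
    print("First two PC scores of specimen 0:", scores[0])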

Relevance:

30.00%

Publisher:

Abstract:

The photogeneration of nitric oxide (NO) by laser flash photolysis was investigated for S-nitroso-glutathione (GSNO) and S-nitroso-N-acetylcysteine (NacySNO) at pH 6.4 (PBS/HCl) and 7.4 (PBS). Irradiation of the S-nitrosothiols with light (λ = 355 nm), followed by absorption spectroscopy, resulted in the homolytic decomposition of NacySNO and GSNO to generate radicals (GS· and NacyS·) and NO. The release of NO from the donor compounds, measured with an ISO-Nometer apparatus, was larger at pH 7.4 than at pH 6.4. NacySNO was also incorporated into dipalmitoyl-phosphatidylcholine liposomes in the presence and absence of zinc phthalocyanine (ZnPC), a well-known photosensitizer useful for photodynamic therapy. Liposomes are commonly used as carriers for hydrophobic compounds such as ZnPC. Inclusion of ZnPC resulted in a decrease in NO liberation in the liposomal medium. However, there was a synergistic action of the photosensitizer and the S-nitrosothiols, resulting in the formation of other reactive species such as peroxynitrite, a potent oxidizing agent. These data show that NO release depends on pH and the medium, as well as on the laser energy applied to the system. Changes in the absorption spectrum were monitored as a function of light exposure.

Relevance:

30.00%

Publisher:

Abstract:

In neurolinguistics, the use of diagnostic tests developed in other countries can create difficulties in the interpretation of results due to cultural, demographic and linguistic differences. In a country such as Brazil, with great social contrasts, schooling exerts a powerful influence on the abilities of normal individuals. The objective of the present study was to identify the influence of schooling on the performance of normal Brazilian individuals on the Boston Diagnostic Aphasia Examination (BDAE), in order to obtain reference values for the Brazilian population. We studied 107 normal subjects ranging in age from 15 to 84 years (mean ± SD = 47.2 ± 17.6 years), with educational level ranging from 1 to 24 years (9.9 ± 4.8 years). Subjects were compared for scores obtained in the 28 subtests of the BDAE after being divided into groups according to age (15 to 30 years, N = 24; 31 to 50 years, N = 33; 51 years or more, N = 50) and education (1 to 4 years, N = 26; 5 to 8 years, N = 17; 9 years or more, N = 61). Subjects with 4 years or less of education performed more poorly in Word Discrimination, Visual Confrontation Naming, Reading of Sentences and Paragraphs, and Primer-Level Dictation (P < 0.05). When the schooling cut-off was 8 years or less, subjects performed more poorly in all subtests (P < 0.05), except Responsive Naming, Word Recognition and Word-Picture Matching. The elderly performed more poorly (P < 0.05) in Complex Ideational Material, Visual Confrontation Naming, Comprehension of Oral Spelling, Written Confrontation Naming, and Sentences to Dictation. We present the reference values for the cut-off scores according to educational level.

Relevance:

30.00%

Publisher:

Abstract:

Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining such data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended so that it could be implemented on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Furthermore, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that, without proper model validation, for example using nested cross-validation, models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
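A minimal sketch of the nested cross-validation scheme advocated above, with filter-type feature selection and hyperparameter tuning kept inside the inner loop, is shown below using scikit-learn; the synthetic data and parameter grid are illustrative only and do not reproduce the thesis pipeline.

    # Nested cross-validation with filter-based feature selection inside the
    # inner loop; synthetic data stands in for a genotype matrix.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.pipeline import Pipeline

    X, y = make_classification(n_samples=200, n_features=500, n_informative=10, random_state=0)

    pipe = Pipeline([
        ("select", SelectKBest(score_func=f_classif)),   # filter-type variant selection
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    param_grid = {"select__k": [10, 50, 100], "clf__C": [0.1, 1.0, 10.0]}

    inner = GridSearchCV(pipe, param_grid, cv=3)          # inner loop: tuning and selection
    outer_scores = cross_val_score(inner, X, y, cv=5)     # outer loop: generalization estimate
    print("Nested CV accuracy: %.3f +/- %.3f" % (outer_scores.mean(), outer_scores.std()))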