Abstract:
Context: BL Lacs, a subclass of blazars, are the most numerous extragalactic objects detected in the Very High Energy (VHE) gamma-ray band. Large-amplitude flux variability, sometimes on very short time scales, is one of their common characteristics; significant optical polarization is another. BL Lac spectra show a continuous and featureless Spectral Energy Distribution (SED) with two peaks. Among the 1442 BL Lacs in the Roma-BZB catalogue, only 51 have been detected in the VHE gamma-ray band. BL Lacs are also the most numerous class (more than 50% of 514 objects) among the sources detected above 10 GeV by Fermi-LAT. Many BL Lacs are therefore expected to be discovered in the VHE gamma-ray band. However, given the limitations of current and near-future Imaging Air Cherenkov Telescope technology, astronomers must predict in advance whether an object emits VHE gamma-rays. Several VHE gamma-ray prediction methods have already been introduced, but none is yet confirmed. Cross-band correlations are the building blocks of any such prediction method. Aims: We investigate cross-band correlations between the flux energy density, luminosity and spectral index of the sample. We also check whether the recently discovered MAGIC J2001+435 is a typical BL Lac. Methods: We select a sample of 42 TeV BL Lacs and collect 20 of their properties in five energy bands from the literature and from the Tuorla blazar monitoring program database. All data are synchronized to be comparable with each other. Finally, we choose 55 pairs of datasets for cross-band correlation analysis and investigate whether there is a correlation within each pair. For MAGIC J2001+435 we analyze the publicly available Swift-XRT data and use still unpublished VHE gamma-ray data from the MAGIC collaboration. The results are compared to the other sources of the sample.
Results: The low-state luminosity of VHE gamma-ray sources detected multiple times is strongly correlated with the luminosities in all other bands. The high state, however, does not show such strong correlations. Sources detected only once in VHE gamma-rays behave similarly to the low state of the multiply detected ones. Finally, MAGIC J2001+435 is a typical TeV BL Lac, although for some properties this source lies at the edge of the whole sample (e.g. in terms of X-ray flux). Keywords: BL Lac(s), population study, correlation analysis, multi-wavelength analysis, VHE gamma-rays, gamma-rays, X-rays, optical, radio
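As a sketch of the cross-band correlation analysis described above, the following computes a Spearman rank correlation between the log-luminosities of two bands. The luminosity values are illustrative, made-up numbers; the paper's actual data and statistical tests are not reproduced here.

```python
def ranks(v):
    """Average ranks (1-based); ties get the mean of their rank positions."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# Illustrative (made-up) log-luminosities in two bands for five sources:
log_L_xray = [44.1, 45.0, 43.2, 44.6, 45.3]
log_L_vhe  = [43.0, 43.9, 42.5, 43.4, 44.2]
rho = spearman(log_L_xray, log_L_vhe)
```

A rank correlation is a natural choice for this kind of study because it is insensitive to the unknown (often nonlinear) scaling between bands.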
Abstract:
Parallel-connected photovoltaic inverters are required in large solar plants where it is not economically or technically reasonable to use a single inverter. Currently, parallel inverters require individual isolating transformers to cut the path of the circulating current. In this doctoral dissertation, the problem is approached by attempting to minimize the generated circulating current. The circulating current is a function of the common-mode voltages generated by the parallel inverters and can be minimized by synchronizing the inverters. The synchronization has previously been achieved with a communication link. However, in photovoltaic systems the inverters may be located far apart from each other; thus, a communication-free control is desired. It is shown in this doctoral dissertation that the circulating current can also be obtained from a common-mode voltage measurement. A control method based on a short-time switching-frequency transition is developed and tested in an actual photovoltaic environment of two parallel inverters connected to two 5 kW solar arrays. Controls based on the measurement of the circulating current and of the common-mode voltage are generated and tested. A communication-free method of controlling the circulating current between parallel-connected inverters is developed and verified.
Abstract:
Familial hypercholesterolemia (FH) is a metabolic disorder inherited as an autosomal dominant trait characterized by an increased plasma low-density lipoprotein (LDL) level. The disease is caused by several different mutations in the LDL receptor gene. Although early identification of individuals carrying the defective gene could be useful in reducing the risk of atherosclerosis and myocardial infarction, the techniques available for determining the number of the functional LDL receptor molecules are difficult to carry out and expensive. Polymorphisms associated with this gene may be used for unequivocal diagnosis of FH in several populations. The aim of our study was to evaluate the genotype distribution and relative allele frequencies of three polymorphisms of the LDL receptor gene, HincII1773 (exon 12), AvaII (exon 13) and PvuII (intron 15), in 50 unrelated Brazilian individuals with a diagnosis of heterozygous FH and in 130 normolipidemic controls. Genomic DNA was extracted from blood leukocytes by a modified salting-out method. The polymorphisms were detected by PCR-RFLP. The FH subjects showed a higher frequency of A+A+ (AvaII), H+H+ (HincII1773) and P1P1 (PvuII) homozygous genotypes when compared to the control group (P<0.05). In addition, FH probands presented a high frequency of A+ (0.58), H+ (0.61) and P1 (0.78) alleles when compared to normolipidemic individuals (0.45, 0.45 and 0.64, respectively). The strong association observed between these alleles and FH suggests that AvaII, HincII1773 and PvuII polymorphisms could be useful to monitor the inheritance of FH in Brazilian families.
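The allele frequencies reported above follow directly from genotype counts, since each individual carries two alleles. The sketch below shows the standard calculation; the genotype counts used are hypothetical, chosen only to reproduce the reported A+ frequency of 0.58 in the 50 FH probands (the abstract does not give the actual genotype breakdown).

```python
def allele_freq(hom_plus, het, hom_minus):
    # Frequency of the '+' allele from genotype counts (2 alleles per person).
    n = hom_plus + het + hom_minus
    return (2 * hom_plus + het) / (2 * n)

# Hypothetical genotype counts for AvaII in the 50 FH probands, chosen only
# to reproduce the reported A+ allele frequency of 0.58:
freq_aplus = allele_freq(18, 22, 10)
```

The same computation, applied separately to probands and controls for each polymorphism (AvaII, HincII1773, PvuII), yields the frequencies compared in the study.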
Abstract:
Digital business ecosystems (DBEs) are becoming an increasingly popular concept for modelling and building distributed systems in heterogeneous, decentralized and open environments. Information and communication technology (ICT) enabled business solutions have created an opportunity for automated business relations and transactions. The deployment of ICT in business-to-business (B2B) integration seeks to improve competitiveness by establishing real-time information and offering better information visibility to business ecosystem actors. The flows of products, components and raw materials in supply chains are traditionally studied in logistics research. In this study, we expand the research to cover the parallel service and information flows, referred to as information logistics integration. In this thesis, we show how better integration and automation of information flows enhance the speed of processes and thus provide cost savings and other benefits for organizations. Investments in DBEs are intended to add value through business automation and are key decisions in building up information logistics integration. Business solutions that build on automation are important sources of value in networks that promote and support business relations and transactions. Value is created through improved productivity and effectiveness when new, more efficient collaboration methods are discovered and integrated into the DBE. Organizations, business networks and collaborations, even with competitors, form DBEs in which information logistics integration has a significant role as a value driver.
However, traditional economic and computing theories do not treat digital business ecosystems as a separate form of organization, and they do not provide conceptual frameworks that can be used to explore digital business ecosystems as value drivers; combined internal management and external coordination mechanisms for information logistics integration are not currently part of a company's strategic process. In this thesis, we have developed and tested a framework to explore the digital business ecosystems developed and a coordination model for digital business ecosystem integration; moreover, we have analysed the value of information logistics integration. The research is based on a case study and on mixed methods, in which we use the Delphi method and Internet-based tools for idea generation and development. We conducted many interviews with key experts, which we recorded, transcribed and coded to find success factors. Quantitative analyses were based on a Monte Carlo simulation, which estimated cost savings, and on Real Option Valuation, which sought an optimal investment program at the ecosystem level. This study provides valuable knowledge regarding information logistics integration by utilizing a suitable business process information model for collaboration. The information model is based on the business process scenarios and on detailed transactions for the mapping and automation of product, service and information flows. The research results illustrate the current gap in the understanding of information logistics integration in a digital business ecosystem. Based on the success factors, we were able to illustrate how specific coordination mechanisms related to network management and orchestration could be designed. We also pointed out the potential of information logistics integration in value creation. With the help of global standardization experts, we designed the core information model for B2B integration.
We built this quantitative analysis using the Monte Carlo based simulation model and the Real Option Valuation model. This research covers relevant new research disciplines, such as information logistics integration and digital business ecosystems, in which the current literature needs to be improved. The research was carried out with high-level experts and managers responsible for global business network B2B integration. However, the research was dominated by one industry domain, and therefore a more comprehensive exploration should be undertaken to cover a larger population of business sectors. Based on this research, a new quantitative survey could provide new possibilities to examine information logistics integration in digital business ecosystems. The value activities indicate that further studies should continue, especially with regard to the collaboration issues of integration, focusing on a user-centric approach. We should better understand how real-time information supports customer value creation by embedding the information into the lifetime value of products and services. The aim of this research was to build competitive advantage through B2B integration to support a real-time economy. For practitioners, this research created several tools and concepts to improve value activities, information logistics integration design, and management and orchestration models. Based on the results, the companies were able to better understand the formulation of the digital business ecosystem and the importance of joint efforts in collaboration. However, the challenge of incorporating this new knowledge into strategic processes in a multi-stakeholder environment remains. This challenge has been noted, and new projects have been established in pursuit of a real-time economy.
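A Monte Carlo estimation of cost savings, as used in the thesis, can be sketched as follows. Every distribution and unit cost below is an illustrative assumption (hypothetical message volumes and per-message handling costs), not a figure from the study.

```python
import random

def simulate_savings(n_draws=10_000, seed=1):
    """Monte Carlo sketch of annual cost savings from B2B message automation.
    All distributions and unit costs are illustrative assumptions."""
    rng = random.Random(seed)
    savings = []
    for _ in range(n_draws):
        volume = rng.triangular(50_000, 150_000, 90_000)  # messages per year
        manual = rng.uniform(3.0, 6.0)       # EUR per manually handled message
        automated = rng.uniform(0.2, 0.8)    # EUR per automated message
        savings.append(volume * (manual - automated))
    savings.sort()
    return {"mean": sum(savings) / n_draws,
            "p05": savings[int(0.05 * n_draws)],
            "p95": savings[int(0.95 * n_draws)]}
```

Reporting a percentile interval alongside the mean is what makes the simulation useful for investment decisions: it quantifies the uncertainty of the savings estimate rather than giving a single number.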
Abstract:
Cross-sector collaboration and partnerships have become an emerging and desired strategy for addressing major social and environmental challenges. Despite their popularity, managing cross-sector collaboration has proven to be very challenging. Even though cross-sector collaboration and partnership management have been widely studied and discussed in recent years, ensuring their effectiveness, as well as their ability to create value with respect to the problems they address, has remained very difficult, and there is little or no evidence of their ability to create value. Given these challenges, this study aims to explore how to manage cross-sector collaborations and partnerships so as to improve their effectiveness and to create more value for all partners involved in the collaboration as well as for customers. The thesis is divided into two parts. The first part comprises an overview of the relevant literature (including strategic management, value networks and value creation theories), followed by a presentation of the results of the whole thesis and the contribution made by the study. The second part consists of six research publications, including both quantitative and qualitative studies. The chosen research strategy is triangulation; the study includes four types of triangulation: (1) theoretical triangulation, (2) methodological triangulation, (3) data triangulation and (4) researcher triangulation. Two publications represent conceptual development and are based on secondary data research. One publication is a quantitative study, carried out through a survey. The other three publications represent qualitative studies based on case studies, where data was collected through interviews and workshops with the participation of managers from all three sectors: public, private and the third (nonprofit).
The study consolidates the field of “strategic management of value networks,” which is proposed to be applied in the context of cross-sector collaboration and partnerships with the aim of increasing their effectiveness and improving the process of value creation. Furthermore, the study proposes a first definition for the strategic management of value networks. The study also proposes and develops two strategy tools that are recommended for the strategic management of value networks in cross-sector collaboration and partnerships. Taking a step forward, the study implements the strategy tools in practice, aiming to demonstrate how new value can be created by using the developed strategy tools for the strategic management of value networks. This study makes four main contributions. (1) First, it makes a theoretical contribution by providing new insights and consolidating the field of strategic management of value networks, also proposing a first definition for the strategic management of value networks. (2) Second, the study makes a methodological contribution by proposing and developing two strategy tools for value networks of cross-sector collaboration: (a) value network mapping, a method that allows us to assess the current and the potential value network, and (b) the Value Network Scorecard, a method of performance measurement and performance prediction in cross-sector collaboration. (3) Third, the study has managerial implications, offering new solutions and empirical evidence on how to increase the effectiveness of cross-sector collaboration, and also allows managers to understand how new value can be created in cross-sector partnerships and how to realize the full potential of collaboration.
(4) And fourth, the study has practical implications, allowing managers to understand how to use the strategy tools developed in this study in practice, and providing discussion of the limitations of the proposed tools as well as of the general limitations of the study.
Abstract:
Infarct-induced heart failure is usually associated with cardiac hypertrophy and decreased β-adrenergic responsiveness. However, conflicting results have been reported concerning the density of the L-type calcium current (I Ca(L)) and the mechanisms underlying the decreased β-adrenergic inotropic response. We determined I Ca(L) density, cytoplasmic calcium ([Ca2+]i) transients, and the effects of β-adrenergic stimulation (isoproterenol) in a model of postinfarction heart failure in rats. Left ventricular myocytes were obtained by enzymatic digestion 8-10 weeks after infarction. Electrophysiological recordings were obtained using the patch-clamp technique. [Ca2+]i transients were investigated via fura-2 fluorescence. β-Adrenergic receptor density was determined by [³H]-dihydroalprenolol binding to left ventricle homogenates. Postinfarction myocytes showed a significant 25% reduction in mean I Ca(L) density (5.7 ± 0.28 vs 7.6 ± 0.32 pA/pF) and a 19% reduction in mean peak [Ca2+]i transients (0.13 ± 0.007 vs 0.16 ± 0.009) compared to sham myocytes. The isoproterenol-stimulated increase in I Ca(L) was significantly smaller in postinfarction myocytes (Emax: 63.6 ± 4.3 vs 123.3 ± 0.9% in sham myocytes), but the EC50 was not altered. The isoproterenol-stimulated peak amplitude of [Ca2+]i transients was also blunted in postinfarction myocytes. Adenylate cyclase activation by forskolin produced similar I Ca(L) increases in both groups. β-Adrenergic receptor density was significantly reduced in homogenates from infarcted hearts (Bmax: 93.89 ± 20.22 vs 271.5 ± 31.43 fmol/mg protein in sham myocytes), while Kd values were similar. We conclude that postinfarction myocytes from large infarcts display reduced I Ca(L) density and peak [Ca2+]i transients. The response to β-adrenergic stimulation was also reduced, probably owing to β-adrenergic receptor down-regulation and not to changes in adenylate cyclase activity.
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached by these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. to target strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. Additional, more cost-effective techniques thus have to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping and for the characterization of soil properties relevant to a.s. soil environmental risk management, using all available data: soil and water samples, as well as datalayers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks (ANNs) were assessed on the Sirppujoki River catchment (c. 440 km²) located in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were utilized as evidential datalayers. The methods also required the use of point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed ANNs demonstrated good classification abilities for a.s. soil probability mapping at catchment scale.
Slightly better results were achieved using a Radial Basis Function (RBF)-based ANN than with the Radial Basis Functional Link Net (RBFLN) method, narrowing down the most probable areas for a.s. soil occurrence more accurately and defining the least probable areas more properly. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive land for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset to target strategic areas more precisely for subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield a more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale. Mapping at this scale would be extremely time-consuming through manual assessment. The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all, covering c.
21,300 km²) which were mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (the sulfate content and the sulfate/chloride ratio) were compared to the extent of the most probable areas for a.s. soils in the surveyed catchments. High sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by the independent water chemistry data, suggesting that the a.s. soil probability maps created with the different methods are reliable and comparable.
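The idea behind RBF-based probability mapping can be sketched with a deliberately simplified model: a kernel density scorer that compares a location's evidential features against known a.s. and non-a.s. soil profiles. This is far simpler than the trained RBF/RBFLN networks used in the thesis, and the feature vectors below are hypothetical normalized values, not real datalayers.

```python
import math

def rbf_score(x, points, sigma=1.0):
    # Sum of Gaussian radial basis functions centred on 'points', evaluated at x.
    return sum(math.exp(-sum((a - b) ** 2 for a, b in zip(x, p)) / (2 * sigma ** 2))
               for p in points)

def as_soil_probability(x, as_points, non_as_points, sigma=1.0):
    # Normalized score that location x resembles the known a.s. soil profiles.
    s_as = rbf_score(x, as_points, sigma)
    s_non = rbf_score(x, non_as_points, sigma)
    return s_as / (s_as + s_non)

# Hypothetical normalized evidential features (e.g. aerogeophysics response,
# slope) for known a.s. and non-a.s. soil profile locations:
as_points = [(0.10, 0.10), (0.20, 0.15)]
non_as_points = [(0.90, 0.80), (0.80, 0.90)]
```

Evaluating `as_soil_probability` over a grid of locations yields exactly the kind of probability map described above; a real RBF network would additionally learn the centre positions, widths and output weights from the training points.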
Abstract:
Three recombinant antigens of Treponema pallidum Nichols strain were fused with GST, cloned and expressed in Escherichia coli, resulting in high levels of GST-rTp47 and GST-rTp17 expression; supplementation with arginine tRNA for the AGR codons was needed to obtain GST-rTp15 overexpression. Purified fusion protein yields were 1.9, 1.7 and 5.3 mg/l of cell culture for GST-rTp47, GST-rTp17 and GST-rTp15, respectively. The identities of the antigens obtained were confirmed by automated DNA sequencing using an ABI Prism 310 and by peptide mapping with Finnigan LC/MS. These recombinant antigens were evaluated by immuno-slot blot techniques applied to 137 serum samples: from patients with a clinical and laboratory diagnosis of syphilis (61 samples), from healthy blood donors (50 samples), from individuals with sexually transmitted diseases other than syphilis (3 samples), and from individuals with other spirochetal diseases such as Lyme disease (20 samples) and leptospirosis (3 samples). The assay had a sensitivity of 95.1% (95% CI, 86.1 to 98.7%) and a specificity of 94.7% (95% CI, 87.0 to 98.7%); a stronger reactivity was observed with the rTp17 fraction. The immunoreactivity results showed that immuno-slot blot techniques based on fusion recombinant antigens are suitable for use in diagnostic assays for syphilis.
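The sensitivity and specificity above are simple proportions with confidence intervals. The sketch below recomputes the point estimates from counts inferred from the reported percentages and group sizes (58/61 reactive syphilis sera, 72/76 non-reactive non-syphilis sera); the interval shown is a Wald-style approximation, since the abstract does not state which interval method (e.g. Wilson or exact) was used, so the bounds will not match the reported CIs exactly.

```python
import math

def proportion_ci(k, n, z=1.96):
    """Point estimate and Wald-style 95% CI for a proportion. Wilson or
    exact intervals are common alternatives and give different bounds."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Counts inferred from the reported percentages and sample sizes:
sens, sens_lo, sens_hi = proportion_ci(58, 61)   # sensitivity, ~0.951
spec, spec_lo, spec_hi = proportion_ci(72, 76)   # specificity, ~0.947
```

Note that the asymmetric CIs reported in the abstract (86.1 to 98.7%) suggest an exact or Wilson interval was used, which is the better choice for proportions this close to 1.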
Abstract:
The aim of this work is to invert the ionospheric electron density profile from riometer (Relative Ionospheric Opacity meter) measurements. The new riometer-type instrument KAIRA (Kilpisjärvi Atmospheric Imaging Receiver Array) is used to measure the cosmic HF radio noise absorption that takes place in the D-region ionosphere between 50 and 90 km. In order to invert the electron density profile, synthetic data are used to estimate the unknown parameter Neq with a spline-height method, which works by parameterizing the electron density profile at different altitudes. Moreover, a smoothing prior method is also used to sample from the posterior distribution by truncating the prior covariance matrix. The smoothing prior approach makes it easier to find the posterior using the MCMC (Markov Chain Monte Carlo) method.
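The MCMC inversion with a smoothing prior can be illustrated with a toy version of the problem: a random-walk Metropolis sampler for a discretized density profile, with a Gaussian prior penalizing jumps between adjacent altitude bins. The linear forward model, noise level, prior strength and grid size below are all illustrative assumptions, not the actual KAIRA forward model.

```python
import math
import random

def smoothness_logprior(ne, alpha=10.0):
    # Gaussian smoothing prior: penalizes jumps between adjacent altitude bins.
    return -alpha * sum((ne[i + 1] - ne[i]) ** 2 for i in range(len(ne) - 1))

def loglike(ne, absorption_obs, weights, noise=0.05):
    # Toy linear forward model: absorption = weighted sum of bin densities.
    pred = sum(w * n for w, n in zip(weights, ne))
    return -((pred - absorption_obs) ** 2) / (2 * noise ** 2)

def metropolis(absorption_obs, weights, n_alt=5, steps=5000, seed=0):
    rng = random.Random(seed)
    ne = [1.0] * n_alt
    logpost = lambda v: loglike(v, absorption_obs, weights) + smoothness_logprior(v)
    lp = logpost(ne)
    for _ in range(steps):
        # Random-walk proposal; the clip to >= 0 is a crude positivity constraint.
        prop = [max(0.0, n + rng.gauss(0.0, 0.05)) for n in ne]
        lp_prop = logpost(prop)
        if math.log(rng.random()) < lp_prop - lp:
            ne, lp = prop, lp_prop
    return ne
```

The smoothing prior plays the same role here as in the thesis: it regularizes an otherwise underdetermined inversion (one absorption value, several altitude bins) so that the posterior concentrates on physically plausible, smooth profiles.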
Abstract:
Numerical simulation of plasma sources is very important: such models allow one to vary different plasma parameters with a high degree of accuracy and to conduct measurements without disturbing the system balance. Recently, scientific and practical interest has increased in so-called two-chamber plasma sources. In one chamber (the small, or discharge, chamber) an external power source is embedded, and the plasma forms there. In the other (the large, or diffusion, chamber) plasma exists owing to the transport of particles and energy through the boundary between the chambers. In this particular work, models of two-chamber plasma sources with argon and oxygen as active media were constructed. These models give interesting results for the electric field profiles and, as a consequence, for the density profiles of the charged particles.
Abstract:
Electrical machine drives are the most electrical-energy-consuming systems worldwide. The largest proportion of drives is found in industrial applications. There are, however, many other applications that are also based on the use of electrical machines, because they have a relatively high efficiency, a low noise level, and do not produce local pollution. Electrical machines can be classified into several categories. One of the most commonly used electrical machine types (especially in industry) is the induction motor, also known as the asynchronous machine. Induction motors have a mature production process and a robust rotor construction. However, in a world pursuing higher energy efficiency with reasonable investments, not every application benefits from using this type of motor drive. The main drawback of induction motors is the fact that they need a slip-caused, and thus loss-generating, current in the rotor, as well as additional stator current for magnetic field production alongside the torque-producing current. This can reduce the electric motor drive efficiency, especially in low-speed, low-power applications. Often, when high torque density is required together with low losses, it is desirable to apply permanent magnet technology, because in this case there is no need to use current to produce the basic excitation of the machine. This promotes the effectiveness of copper use in the stator, and further, there is no rotor current in these machines. Again, if permanent magnets with a high remanent flux density are used, the air gap flux density can be higher than in conventional induction motors. These advantages have raised the popularity of permanent magnet synchronous machines (PMSMs) in some challenging applications, such as hybrid electric vehicles (HEVs), wind turbines, and home appliances. Usually, a correctly designed PMSM has a higher efficiency and consequently lower losses than its induction machine counterparts.
Therefore, the use of these electrical machines reduces the energy consumption of the whole system to some extent, which can provide good motivation to apply permanent magnet technology to electrical machines. However, the cost of high-performance rare earth permanent magnets in these machines may not be affordable in many industrial applications, because the tight competition between manufacturers dictates the rules of low-cost and highly robust solutions, where asynchronous machines seem to be more feasible at the moment. The two main electromagnetic components of an electrical machine are the stator and the rotor. In the case of a conventional radial flux PMSM, the stator contains the magnetic circuit lamination and the stator winding, and the rotor consists of rotor steel (laminated or solid) and permanent magnets. The lamination itself does not significantly influence the total cost of the machine, even though it can considerably increase the construction complexity, as it requires a special assembly arrangement. However, thin metal sheet processing methods are very effective and economically feasible. Therefore, the cost of the machine is mainly affected by the stator winding and the permanent magnets. The work proposed in this doctoral dissertation comprises a description and analysis of two approaches to PMSM cost reduction: one on the rotor side and the other on the stator side. The first approach, on the rotor side, includes the use of low-cost and abundant ferrite magnets together with a tooth-coil winding topology and an outer rotor construction. The second approach, on the stator side, exploits the use of a modular stator structure instead of a monolithic one. PMSMs with the proposed structures were thoroughly analysed by finite element method (FEM) based tools. It was found that by implementing the described principles, some favourable characteristics of the machine (mainly concerning the machine size) will inevitably be compromised.
However, the main target of the proposed approaches is not to compete with conventional rare earth PMSMs, but to reduce the price at which they can be implemented in industrial applications, keeping their dimensions at the same level or lower than those of a typical electrical machine used in the industry at the moment. The measurement results of the prototypes show that the main performance characteristics of these machines are at an acceptable level. It is shown that with certain specific actions it is possible to achieve a desirable efficiency level of the machine with the proposed cost reduction methods.
Abstract:
The association between socioeconomic position (SEP) and serum lipids has been little studied and the results have been controversial. A total of 2063 young adults born in 1978/79 were evaluated at 23-25 years of age in the fourth follow-up of a cohort study carried out in Ribeirão Preto, SP, Brazil, corresponding to 31.8% of the original sample. Total serum cholesterol (TC), triglycerides, high-density cholesterol (HDL cholesterol) and low-density cholesterol (LDL cholesterol) were analyzed according to SEP at birth and during young adulthood. SEP was classified into tertiles of family income and a cumulative score of socioeconomic disadvantage was created. TC was 11.85 mg/100 mL lower among men of lower SEP in childhood (P < 0.01), but no difference was found in women, whereas it was 8.46 mg/100 mL lower among men (P < 0.01) and 8.21 mg/100 mL lower among women of lower SEP in adulthood (P < 0.05). Individuals of lower SEP had lower LDL and HDL cholesterol, with small differences between sexes and between the two times in life. There was no association between SEP and triglyceride levels. After adjustment of income at one time point in relation to the other, some associations lost significance. The greater the socioeconomic disadvantage accumulated along life, the lower the levels of TC, LDL and HDL cholesterol (P < 0.05). The socioeconomic gradient of TC and LDL cholesterol was inverse, representing a lower cardiovascular risk for individuals of lower SEP, while the socioeconomic gradient of HDL cholesterol indicated a lower cardiovascular risk for individuals of higher SEP.