890 results for MC-VIEW
Abstract:
Aerosol particles deteriorate air quality, atmospheric visibility and human health. They affect the Earth's climate by absorbing and scattering sunlight, by forming clouds, and via several feedback mechanisms. Their net effect on the radiative balance is negative, i.e. cooling, which means that particles counteract the effect of greenhouse gases. However, particles remain one of the most poorly known pieces of the climate puzzle. Some airborne particles are natural, some anthropogenic; some enter the atmosphere in particle form, while others form by gas-to-particle conversion. Unless the sources and the dynamical processes shaping the particle population are quantified, they cannot be incorporated into climate models. Molecular-level understanding of new particle formation is still inadequate, mainly because of the lack of measurement techniques suited to detecting the smallest particles and their precursors. This thesis has contributed to our ability to measure newly formed particles. Three new condensation particle counter applications for measuring the concentration of nanoparticles were developed. The suitability of the methods for detecting both charged and electrically neutral particles and molecular clusters as small as 1 nm in diameter was thoroughly tested in both laboratory and field conditions. It was shown that condensation particle counting has reached the size scale of individual molecules, and that besides measuring concentrations the counters can be used to obtain size information. In addition to atmospheric research, the particle counters could find applications in other fields, especially in nanotechnology. Using the new instruments, the first continuous time series of neutral sub-3 nm particle concentrations were measured at two field sites representing two different kinds of environments: the boreal forest and the Atlantic coastline, both known hot spots for new particle formation.
The contribution of ions to the total concentration in this size range was estimated, and the fraction of ions was found to be usually minor, especially in boreal forest conditions. Since the ionization rate is connected to the flux of cosmic rays entering the atmosphere, the relative contribution of neutral versus charged nucleation mechanisms extends beyond academic interest and links this research directly to the current climate debate.
Abstract:
Lactobacillus rhamnosus GG is a probiotic bacterium that is known worldwide. Since its discovery in 1985, the health effects and biology of this health-promoting strain have been researched at an increasing rate. However, knowledge of the molecular biology responsible for these health effects is limited, even though research in this area has continued to grow since the publication of the whole genome sequence of L. rhamnosus GG in 2009. In this thesis, the molecular biology of L. rhamnosus GG was explored by mapping the changes in protein levels in response to diverse stress factors and environmental conditions. The proteomics data were supplemented with transcriptome-level mapping of gene expression. The harsh conditions of the gastrointestinal tract, which involve acidic conditions and detergent-like bile acids, are a notable challenge to the survival of probiotic bacteria. To simulate these conditions, L. rhamnosus GG was exposed to a sudden bile stress, and several stress response mechanisms were revealed, among them various changes in cell envelope properties. L. rhamnosus GG also responded in various ways to mild acid stress, which probiotic bacteria may face in dairy fermentations and product formulations. The acid stress response of L. rhamnosus GG included changes in central metabolism and specific responses related to the control of intracellular pH. Altogether, L. rhamnosus GG was shown to possess a large repertoire of mechanisms for responding to stress conditions, a beneficial characteristic for a probiotic organism. Adaptation to different growth conditions was studied by comparing the proteome-level responses of L. rhamnosus GG to divergent growth media and to different phases of growth. Comparing different growth phases revealed that the metabolism of L. rhamnosus GG is modified markedly during the shift from the exponential to the stationary phase of growth.
These changes were seen at both the proteome and transcriptome levels and in various cellular functions. When the growth of L. rhamnosus GG in a rich laboratory medium was compared with its growth in an industrial whey-based medium, various differences in metabolism and in factors affecting cell surface properties could be seen. These results led us to recommend that industrial-type media be used in laboratory studies of L. rhamnosus GG and other probiotic bacteria, so that the bacteria reach a physiological state similar to that found in industrial products and thus yield more relevant information. In addition, an interesting phenomenon of protein phosphorylation was observed in L. rhamnosus GG: several proteins were found to be phosphorylated, and there were hints that the degree of phosphorylation may depend on the growth pH.
Abstract:
Indian logic has a long history. It roughly covers the domains of two of the six schools (darsanas) of Indian philosophy, namely Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science which ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism. In other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three periods, the ancient, the medieval and the modern, spanning almost thirty centuries. Over the past three decades, advances in Computer Science, and in Artificial Intelligence in particular, have drawn researchers in these areas to the basic problems of language, logic and cognition. In the 1980s, Artificial Intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One important issue in the design of such systems is knowledge acquisition from humans who are experts in a branch of learning (such as medicine or law) and the transfer of that knowledge to a computing system. A second important issue is the validation of the system's knowledge base, i.e. ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering can help computer scientists understand the deeper implications of the terms and concepts they currently use and attempt to develop.
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder's location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces f(s) and f(g), and the problem reduces to one of determining, in a distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating f(s) and f(g) is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings lie on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived from communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. Extension to the multi-party case is straightforward and is briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized to within two bits. Under the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm. (C) 2010 Elsevier B.V. All rights reserved.
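As a rough illustration of the second approach described above, the sketch below shows one-bit threshold quantization at each sensor node followed by simple fusion rules at a fusion center. The readings, the threshold, and the OR/majority fusion rules are all hypothetical choices for illustration; the sketch does not reproduce the paper's Neyman-Pearson analysis or its optimality proof.

```python
def quantize(reading, threshold):
    """Two-level (one-bit) quantization of a local sensor reading."""
    return 1 if reading > threshold else 0

def fuse_or(bits):
    """OR fusion rule: declare 'intruder' if any sensor fires."""
    return int(any(bits))

def fuse_majority(bits):
    """Majority fusion rule: declare 'intruder' if most sensors fire."""
    return int(sum(bits) > len(bits) / 2)

# Hypothetical readings: clutter stays near zero everywhere, while an
# intruder raises only the readings of sensors in its immediate vicinity.
clutter_readings  = [0.1, 0.3, 0.2, 0.15]
intruder_readings = [0.1, 0.9, 0.8, 0.2]

threshold = 0.5
print(fuse_or([quantize(r, threshold) for r in clutter_readings]))   # 0
print(fuse_or([quantize(r, threshold) for r in intruder_readings]))  # 1
```

The choice of fusion rule matters: an OR rule is sensitive to a single triggered sensor (matching the local-sensing assumption), while a majority rule trades sensitivity for robustness to noisy individual bits.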
Abstract:
This letter presents a new class of variational wavefunctions for Fermi systems in any dimension. These wavefunctions introduce correlations between Cooper pairs in different momentum states, and the relevant correlations can be computed analytically. At half filling we obtain a ground state with critical superconducting correlations at the cost of a negligible increase in kinetic energy. We find large enhancements in a Cooper-pair correlation function caused purely by the interplay between the uncertainty principle, repulsion and the proximity of half filling. This is surprising since there is no accompanying signature in the usual charge and spin response functions, and it typifies a novel kind of many-body cooperative behaviour.
Abstract:
This paper presents a low-cost, high-resolution retinal image acquisition system for the human eye. Images acquired by a CMOS image sensor are transferred over the Universal Serial Bus (USB) interface to a personal computer for viewing and further processing. The image acquisition time was estimated at 2.5 seconds. The system can also be used in telemedicine applications.
Abstract:
A new and efficient approach to constructing a 3D wire-frame of an object from its orthographic projections is described. The input can comprise two or more projections, including regular views as well as complete auxiliary views. Each view may contain linear, circular and other conic sections. The output is a 3D wire-frame consistent with the input views. The approach can handle auxiliary views containing curved edges. This generality derives from a new technique for constructing 3D vertices from the input 2D vertices (as opposed to the coordinate matching prevalent in current art): 3D vertices are constructed by projecting the 2D vertices in a pair of views onto the common line of the two views. The construction of 3D edges also does not require adding silhouette and tangential vertices and subsequently splitting edges in the views; the concepts of complete edges and n-tuples are introduced to obviate this need. Entities corresponding to a 3D edge in each view are first identified, and the 3D edges are then constructed from the information available with the matching 2D edges. This allows the algorithm to handle conic sections that are not parallel to any of the viewing directions. The localization of effort in constructing 3D edges is the source of the algorithm's efficiency, as it does not process all potential 3D edges. The working of the algorithm is illustrated on typical drawings. (C) 2011 Elsevier Ltd. All rights reserved.
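As a highly simplified illustration of building 3D vertices from two orthographic views, the sketch below pairs front-view (x, y) vertices with top-view (x, z) vertices that agree on the shared x coordinate. This is a reduction of the idea to axis-aligned views only; the paper's actual technique projects 2D vertices onto the common line of an arbitrary pair of views, which also covers auxiliary views.

```python
def candidate_3d_vertices(front, top, tol=1e-6):
    """Combine front-view (x, y) and top-view (x, z) vertices that agree
    on x, the coordinate shared between these two axis-aligned views.
    Returns candidate 3D vertices; some may not survive later validation
    against additional views."""
    verts = []
    for fx, fy in front:
        for tx, tz in top:
            if abs(fx - tx) <= tol:
                verts.append((fx, fy, tz))
    return verts

# Unit-cube example: each view shows the four corners of a unit square.
front = [(0, 0), (1, 0), (0, 1), (1, 1)]   # (x, y) pairs
top   = [(0, 0), (1, 0), (0, 1), (1, 1)]   # (x, z) pairs
print(len(candidate_3d_vertices(front, top)))  # 8 candidates: the cube corners
```

Note the candidate set can contain spurious vertices for less symmetric objects; pruning against the remaining views (and against edge consistency) is what the full algorithm provides.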
Abstract:
We briefly review the growth and structural properties of InAsSb bulk single crystals and InAsSb epitaxial films grown on semi-insulating GaAs substrates. Temperature-dependent transport measurements on these samples are then correlated with the information obtained from structural (XRD, TEM, SEM) and optical (FTIR absorption) investigations. The temperature dependence of the mobility and the Hall coefficient is modelled theoretically by exactly solving the linearized Boltzmann transport equation through inversion of the collision matrix, and the relative role of the various scattering mechanisms in limiting the low-temperature mobility is estimated. Finally, the first observation of Shubnikov oscillations in InAsSb is discussed.
Abstract:
Frequent episode discovery framework is a popular framework in temporal data mining with many applications. Over the years, many different notions of frequencies of episodes have been proposed along with different algorithms for episode discovery. In this paper, we present a unified view of all the apriori-based discovery methods for serial episodes under these different notions of frequencies. Specifically, we present a unified view of the various frequency counting algorithms. We propose a generic counting algorithm such that all current algorithms are special cases of it. This unified view allows one to gain insights into different frequencies, and we present quantitative relationships among different frequencies. Our unified view also helps in obtaining correctness proofs for various counting algorithms as we show here. It also aids in understanding and obtaining the anti-monotonicity properties satisfied by the various frequencies, the properties exploited by the candidate generation step of any apriori-based method. We also point out how our unified view of counting helps to consider generalization of the algorithm to count episodes with general partial orders.
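For illustration, one widely used frequency notion for serial episodes, the maximal non-overlapped occurrence count, can be computed with a single greedy pass over the event sequence. The sketch below is a minimal example of that one notion, not the generic counting algorithm the paper unifies; the event sequence and episode are hypothetical.

```python
def count_nonoverlapped(sequence, episode):
    """Count maximal non-overlapped occurrences of a serial episode
    (e.g. A -> B -> C): greedily advance through the episode's event
    types and restart after each complete match."""
    count, i = 0, 0
    for event in sequence:
        if event == episode[i]:
            i += 1
            if i == len(episode):   # full occurrence found
                count += 1
                i = 0               # restart; occurrences cannot overlap
    return count

seq = list("ABXCABCXAB")            # a toy event sequence
print(count_nonoverlapped(seq, list("ABC")))  # 2
```

Other frequency notions (windows-based, minimal occurrences, etc.) require different bookkeeping, which is precisely the variation the paper's generic algorithm abstracts over.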
Abstract:
In recent years a number of white dwarfs have been observed with very high surface magnetic fields. We can expect the magnetic field in the cores of these stars to be much higher (~10^14 G). In this paper, we analytically study the effect of a high magnetic field on relativistic cold electrons, and hence its effect on the stability and the mass-radius relation of a magnetic white dwarf. In strong magnetic fields the equation of state of the Fermi gas is modified and Landau quantization comes into play. For magnetic fields that are very high relative to the average energy density of matter, the number of occupied Landau levels is restricted to one or two. We analyze the equation of state of the magnetized degenerate electron gas analytically and attempt to understand the conditions under which the transition from the zeroth to the first Landau level occurs. We also examine the effect of the strong magnetic field on a star collapsing to a white dwarf, and the mass-radius relation of the resulting star. We obtain the interesting theoretical result that it is possible to have white dwarfs with masses exceeding the Chandrasekhar limit.
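The Landau quantization invoked above can be stated compactly. The standard textbook form (quoted here for context, not taken from this abstract) for a relativistic electron with momentum p_z along a uniform field B is:

```latex
% Energy of a relativistic electron in Landau level \nu
E_\nu = \left[ p_z^2 c^2 + m_e^2 c^4 \left( 1 + 2\nu \frac{B}{B_c} \right) \right]^{1/2},
\qquad \nu = 0, 1, 2, \dots
% with the critical field setting the scale for strong quantization:
B_c = \frac{m_e^2 c^3}{e\hbar} \approx 4.414 \times 10^{13}\ \mathrm{G}
```

For B much larger than B_c the level spacing exceeds the typical electron energies, so only the lowest one or two Landau levels are occupied, which is the regime the abstract refers to.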
Abstract:
This work proposes a boosting-based transfer learning approach for head-pose classification from multiple low-resolution views. Head-pose classification performance is adversely affected when the source (training) and target (test) data arise from different distributions (due to changes in face appearance, lighting, etc.). Under such conditions, we employ Xferboost, a Logitboost-based transfer learning framework that integrates knowledge from a few labeled target samples with the source model to effectively minimize misclassifications on the target data. Experiments confirm that the Xferboost framework can improve classification performance by up to 6% when knowledge is transferred between the CLEAR and FBK four-view head-pose datasets.
Abstract:
Multi-view head-pose estimation in low-resolution, dynamic scenes is difficult due to blurred facial appearance and perspective changes as targets move around freely in the environment. Under these conditions, acquiring sufficient training examples to learn the dynamic relationship between position, face appearance and head-pose can be very expensive. Instead, a transfer learning approach is proposed in this work. Upon learning a weighted-distance function from many examples where the target position is fixed, we adapt these weights to the scenario where target positions vary. The adaptation framework incorporates the reliability of the different face regions for pose estimation under positional variation by transforming the target appearance to a canonical appearance corresponding to a reference scene location. Experimental results confirm the effectiveness of the proposed approach, which outperforms the state of the art by 9.5% under the relevant conditions. To aid further research on this topic, we also make DPOSE, a dynamic multi-view head-pose dataset with ground truth, publicly available with this paper.
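As a minimal sketch of the weighted-distance idea underlying the abstract above: a nearest-neighbour pose classifier where per-feature weights encode how reliable each face region remains under positional variation. All features, weights, and labels below are hypothetical, and the actual adaptation framework (canonical-appearance transformation, weight re-learning) is considerably more involved.

```python
import math

def weighted_distance(a, b, w):
    """Weighted Euclidean distance; larger weights mark appearance
    features (face regions) that stay reliable as the target moves."""
    return math.sqrt(sum(wi * (ai - bi) ** 2 for ai, bi, wi in zip(a, b, w)))

def classify_pose(query, training, weights):
    """Nearest-neighbour head-pose label under the weighted distance."""
    return min(training, key=lambda t: weighted_distance(query, t[0], weights))[1]

# Toy 3-dimensional appearance features with hypothetical pose labels.
train = [((1.0, 0.2, 0.1), "frontal"),
         ((0.1, 1.0, 0.9), "profile")]
weights = (1.0, 0.5, 0.1)   # e.g. down-weight a blur-prone region

print(classify_pose((0.9, 0.3, 0.8), train, weights))  # frontal
```

With uniform weights the same query could fall closer to the wrong exemplar; the learned weights are what let the classifier discount regions whose appearance shifts with position.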