869 results for Error-correcting codes (Information theory)
Abstract:
The central objective of research in Information Retrieval (IR) is to discover new techniques to retrieve relevant information in order to satisfy an Information Need. The Information Need is satisfied when relevant information can be provided to the user. In IR, relevance is a fundamental concept which has changed over time, from popular to personal: what was considered relevant before was information for the whole population, whereas what is considered relevant now is specific information for each user. Hence, there is a need to connect the behavior of the system to the condition of a particular person and their social context; from this need the interdisciplinary field called Human-Centered Computing was born. For the modern search engine, the information extracted for the individual user is crucial. According to Personalized Search (PS), two different techniques are necessary to personalize a search: contextualization (the interconnected conditions that occur in an activity) and individualization (the characteristics that distinguish an individual). This shift of focus to the individual's need undermines the rigid linearity of the classical model, which has been overtaken by the ``berry picking'' model; that model explains how search terms change thanks to the informational feedback received from the search activity, introducing the concept of the evolution of search terms. The development of Information Foraging theory, which observed the correlations between animal foraging and human information foraging, also contributed to this transformation through attempts to optimize the cost-benefit ratio. This thesis arose from the need to satisfy human individuality when searching for information, and it develops a synergistic collaboration between the frontiers of technological innovation and recent advances in IR. The search method developed exploits what is relevant for the user by radically changing the way in which an Information Need is expressed, because it is now expressed through the generation of the query together with its own context. The method was conceived with the aim of improving the quality of search by rewriting the query based on contexts automatically generated from a local knowledge base. Furthermore, the goal of optimizing any IR system led to its development as a middleware of interaction between the user and the IR system. The system therefore has just two possible actions: rewriting the query and reordering the results. Equivalent actions have been described for PS, which generally exploits information derived from the analysis of user behavior, whereas the proposed approach exploits knowledge provided explicitly by the user. The thesis goes further by proposing a novel assessment procedure, following the "Cranfield paradigm", for evaluating this type of IR system. The results achieved are interesting considering both the effectiveness obtained and the innovative approach undertaken, together with the several applications inspired by the use of a local knowledge base.
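A minimal sketch of the middleware idea summarized above, assuming a hypothetical local knowledge base stored as a plain term-to-context mapping and a generic keyword-search backend; the names, data structures, and scoring are illustrative only, not the system developed in the thesis.

```python
# Hypothetical sketch of a query-rewriting / result-reordering middleware.
# The knowledge base, scoring, and backend interface are illustrative only.
from typing import Callable

# Local knowledge base: each known term maps to context terms supplied by the user.
KNOWLEDGE_BASE = {
    "jaguar": ["car", "engine", "speed"],
    "python": ["programming", "language", "interpreter"],
}

def rewrite_query(query: str, kb: dict[str, list[str]]) -> str:
    """Expand the query with context terms taken from the local knowledge base."""
    terms = query.lower().split()
    expansion = [ctx for t in terms for ctx in kb.get(t, [])]
    return " ".join(terms + expansion)

def reorder_results(results: list[str], context_terms: set[str]) -> list[str]:
    """Rank documents by how many context terms they contain (simple overlap score)."""
    def score(doc: str) -> int:
        return len(set(doc.lower().split()) & context_terms)
    return sorted(results, key=score, reverse=True)

def search_with_middleware(query: str, backend: Callable[[str], list[str]]) -> list[str]:
    """The two actions available to the middleware: rewrite the query, reorder the results."""
    rewritten = rewrite_query(query, KNOWLEDGE_BASE)
    context = set(rewritten.split()) - set(query.lower().split())
    return reorder_results(backend(rewritten), context)
```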
Abstract:
This thesis examines the literature on local home bias, i.e. investor preference for geographically nearby stocks, and investigates the role of a firm’s visibility, profitability, and opacity in explaining such behavior. While a firm’s visibility is expected to proxy for the behavioral root of such a preference, its profitability and opacity are expected to capture the informational one. I find that less visible, more profitable, and more opaque firms, conditional on demand, benefit from being headquartered in regions characterized by a scarcity of listed firms (a poor local supply of stocks). Specifically, the estimates suggest that firms headquartered in regions with a poor supply of stocks would be worth i) 11 percent more if non-visible, non-profitable, and non-opaque; ii) 16 percent more if profitable; and iii) 28 percent more if both profitable and opaque. Overall, as these features are able to explain most, albeit not all, of the local home bias effect, I argue and then assess that most of the preference for local stocks is determined by a successful attempt to exploit a local information advantage (60 percent), while the rest is determined by a mere (irrational) feeling of familiarity with the local firm (40 percent). Several significant methodological, theoretical, and practical implications follow.
Abstract:
Recent developments in the theory of plasma-based collisionally excited x-ray lasers (XRL) have shown an optimization potential based on the dependence of the absorption region of the pumping laser on its angle of incidence on the plasma. For the experimental proof of this idea, a number of diagnostic schemes were developed, tested, qualified, and applied. A high-resolution imaging system, yielding the keV emission profile perpendicular to the target surface, provided the positions of the hottest plasma regions, which are of interest for the benchmarking of plasma simulation codes. The implementation of a highly efficient spectrometer for the plasma emission made it possible to gain information about the abundance of the ionization states necessary for laser action in the plasma. The intensity distribution and deflection angle of the pump laser beam could be imaged for single XRL shots, giving access to its refraction process within the plasma. During a European collaboration campaign at the Lund Laser Center, Sweden, the optimization of the pumping laser incidence angle resulted in a reduction of the required pumping energy for a Ni-like Mo XRL, which enabled operation at a repetition rate of 10 Hz. Using the experience gained there, the XRL performance at the PHELIX facility (GSI Darmstadt) was significantly improved with respect to the achievable repetition rate and operation at wavelengths below 20 nm, and important information for the development towards multi-100 eV plasma XRLs was acquired. Due to the setup improvements achieved during the work for this thesis, the PHELIX XRL system has now reached a degree of reproducibility and versatility which is sufficient for demanding applications such as the XRL spectroscopy of heavy ions. In addition, a European research campaign aiming at plasma XRLs approaching the water window (wavelengths below 5 nm) was initiated.
Abstract:
This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electro-production and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as charge, magnetic moments, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of two spin-independent (scalar) and four spin-dependent (vector) generalized polarizabilities (GPs). In analogy to classical electrodynamics, the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward. They are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams (100 one-loop diagrams need to be calculated), several computer programs were developed dealing with different aspects of Feynman diagram calculations. One can distinguish between two areas of development, the first concerning the algebraic manipulation of large expressions, and the second dealing with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach using Mathematica and FORM for the algebraic tasks, and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants emerging at fourth order. Furthermore, we present results for the differential cross sections and the generalized polarizabilities of VCS off the proton.
Abstract:
In this thesis we discuss a representation of quantum mechanics and quantum and statistical field theory based on a functional renormalization flow equation for the one-particle-irreducible average effective action, and we employ it to get information on some specific systems.
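For reference, the functional renormalization flow equation for the one-particle-irreducible average effective action is commonly written in the Wetterich form; the notation below (scale-dependent effective action, regulator, RG time) is the standard one and is assumed here rather than quoted from the thesis.

```latex
% Flow equation for the scale-dependent (average) effective action \Gamma_k,
% with regulator R_k and RG "time" t = \ln k (Wetterich form):
\partial_t \Gamma_k[\phi]
  = \frac{1}{2}\,\mathrm{Tr}\!\left[
      \left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1} \partial_t R_k
    \right],
\qquad
\Gamma_k^{(2)}[\phi] = \frac{\delta^2 \Gamma_k[\phi]}{\delta\phi\,\delta\phi}.
```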
Abstract:
This thesis presents different techniques designed to drive a swarm of robots through an a-priori unknown environment in order to move the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two different theories used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS). The first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence has been applied to the presented technique through the Particle Swarm Optimization algorithm (PSO), taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus, i.e. the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach has been followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
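As a rough illustration of how the two ingredients can be combined, the sketch below applies a standard PSO velocity update for navigation and a basic consensus (agreement) step over a fixed neighbor graph for cohesion; the gains, topology, and the way the two terms are mixed are hypothetical, not the controller developed in the thesis.

```python
# Illustrative combination of PSO navigation and consensus-based cohesion.
# Gains, topology, and the blending of the two updates are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_robots, dim = 5, 2
w, c1, c2, eps = 0.7, 1.5, 1.5, 0.1          # PSO inertia/gains, consensus step size
goal = np.array([10.0, 10.0])                # target area (acts as fitness reference)

pos = rng.uniform(0.0, 1.0, (n_robots, dim))
vel = np.zeros((n_robots, dim))
pbest = pos.copy()                           # personal best positions
neighbors = {i: [(i - 1) % n_robots, (i + 1) % n_robots] for i in range(n_robots)}  # ring graph

def fitness(x):
    return -np.linalg.norm(x - goal)         # closer to the goal = better

for step in range(200):
    gbest = max(pbest, key=fitness)          # swarm-wide best (shared information)
    for i in range(n_robots):
        r1, r2 = rng.random(dim), rng.random(dim)
        # PSO velocity update: inertia + cognitive + social terms.
        vel[i] = (w * vel[i]
                  + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        # Consensus step: move toward the neighbors' positions (formation cohesion).
        consensus = sum(pos[j] - pos[i] for j in neighbors[i])
        pos[i] = pos[i] + vel[i] + eps * consensus
        if fitness(pos[i]) > fitness(pbest[i]):
            pbest[i] = pos[i].copy()
```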
Abstract:
Coupled-cluster theory in its single-reference formulation represents one of the most successful approaches in quantum chemistry for the description of atoms and molecules. To extend the applicability of single-reference coupled-cluster theory to systems with degenerate or near-degenerate electronic configurations, multireference coupled-cluster methods have been suggested. One of the most promising formulations of multireference coupled-cluster theory is the state-specific variant suggested by Mukherjee and co-workers (Mk-MRCC). Unlike other multireference coupled-cluster approaches, Mk-MRCC is a size-extensive theory, and the results obtained so far indicate that it has the potential to develop into a standard tool for high-accuracy quantum-chemical treatments. This work deals with developments to overcome the limitations in the applicability of the Mk-MRCC method. To this end, an efficient Mk-MRCC algorithm has been implemented in the CFOUR program package to perform energy calculations within the singles and doubles (Mk-MRCCSD) and singles, doubles, and triples (Mk-MRCCSDT) approximations. This implementation exploits the special structure of the Mk-MRCC working equations, which allows existing efficient single-reference coupled-cluster codes to be adapted. The algorithm has the correct computational scaling of d*N^6 for Mk-MRCCSD and d*N^8 for Mk-MRCCSDT, where N denotes the system size and d the number of reference determinants. For the determination of molecular properties such as the equilibrium geometry, the theory of analytic first derivatives of the energy for the Mk-MRCC method has been developed using a Lagrange formalism. The Mk-MRCC gradients within the CCSD and CCSDT approximations have been implemented, and their applicability has been demonstrated for various compounds such as 2,6-pyridyne, the 2,6-pyridyne cation, m-benzyne, ozone, and cyclobutadiene. The development of analytic gradients for Mk-MRCC offers the possibility of routinely locating minima and transition states on the potential energy surface. It can be considered a key step towards the routine investigation of multireference systems and the calculation of their properties. As the full inclusion of triple excitations in Mk-MRCC energy calculations is computationally demanding, a parallel implementation is presented in order to circumvent limitations due to the required execution time. The proposed scheme is based on the adaptation of a highly efficient serial Mk-MRCCSDT code by parallelizing the time-determining steps. A first application to 2,6-pyridyne is presented to demonstrate the efficiency of the current implementation.
Abstract:
This dissertation models the Turkish college admission procedure. It started with the purpose of reducing the inefficiencies in the Turkish market. For this purpose, we propose a mechanism under a new market structure that we prefer to call semi-centralization. In chapter 1, we give a brief summary of Matching Theory. We present the first examples in Matching history together with the most general papers and mechanisms. In chapter 2, we propose our mechanism. In its real-life application, that is, Turkish university placement, the mechanism reduces the inefficiencies of the current system. The success of the mechanism depends on the preference profile. It is easy to show that under complete information the mechanism implements the full set of stable matchings for a given profile. In chapter 3, we refine our basic mechanism. The modification of the mechanism has a crucial effect on the results. The new mechanism is, as we call it, a middle mechanism. On one subdomain, this mechanism coincides with the original basic mechanism; on the other partition, it gives the same results as Gale and Shapley's algorithm. In chapter 4, we apply our basic mechanism to the well-known Roommate Problem. Since the roommate problem is a one-sided game, we first propose an auxiliary function to convert the game into a semi-centralized two-sided game, because our basic mechanism is designed for that framework. We show that this process is successful in finding a stable matching whenever a stable matching exists. We also show that our mechanism easily and simply tells us whether a profile lacks stability by using purified orderings. Finally, we show a method to find all the stable matchings when multiple stable matchings exist. The method is simply to run the mechanism for all of the top agents in the social preference.
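Since the abstract compares the proposed mechanism with Gale and Shapley's algorithm, a minimal student-proposing deferred-acceptance sketch is given below for reference; the preference data are made up, and the code illustrates only the textbook algorithm (with complete preference lists and one seat per school), not the semi-centralized mechanism of the thesis.

```python
# Student-proposing deferred acceptance (Gale-Shapley), single-seat schools.
# Assumes complete preference lists; the lists below are made-up illustrations.

def deferred_acceptance(student_prefs, school_prefs):
    """Return a stable matching {student: school} for one-seat schools."""
    rank = {s: {stud: i for i, stud in enumerate(prefs)} for s, prefs in school_prefs.items()}
    next_choice = {stud: 0 for stud in student_prefs}   # index of next school to propose to
    held = {}                                           # school -> tentatively held student
    free = list(student_prefs)
    while free:
        stud = free.pop()
        school = student_prefs[stud][next_choice[stud]]
        next_choice[stud] += 1
        current = held.get(school)
        if current is None:
            held[school] = stud                         # school tentatively accepts
        elif rank[school][stud] < rank[school][current]:
            held[school] = stud
            free.append(current)                        # displaced student proposes again
        else:
            free.append(stud)                           # rejected, tries next school
    return {stud: school for school, stud in held.items()}

students = {"a": ["X", "Y"], "b": ["X", "Y"]}
schools = {"X": ["b", "a"], "Y": ["a", "b"]}
print(deferred_acceptance(students, schools))           # {'b': 'X', 'a': 'Y'}
```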
Abstract:
Chapter 1 studies how consumers’ switching costs affect the pricing and profits of firms competing in two-sided markets, such as Apple and Google in the smartphone market. When two-sided markets are dynamic, rather than merely static, I show that switching costs lower the first-period price if network externalities are strong, in contrast to what has been found in one-sided markets. By contrast, switching costs soften price competition in the initial period if network externalities are weak and consumers are more patient than the platforms. Moreover, an increase in switching costs on one side decreases the first-period price on the other side. Chapter 2 examines firms’ incentives to invest in local and flexible resources when demand is uncertain and correlated. I find that the market power of the monopolist providing flexible resources distorts investment incentives, while competition mitigates this distortion. The extent of improvement depends critically on demand correlation and the cost of capacity: under the social optimum and monopoly, if the flexible resource is cheap, the relationship between investment and correlation is positive, and if it is costly, the relationship becomes negative; under duopoly, the relationship is positive. The analysis also sheds light on some policy discussions in markets such as cloud computing. Chapter 3 develops a theory of sequential investments in cybersecurity. The regulator can use safety standards and liability rules to increase security. I show that the joint use of an optimal standard and a full liability rule leads to underinvestment ex ante and overinvestment ex post. Switching instead to a partial liability rule can correct the inefficiencies. This suggests that to improve security, the regulator should encourage not only firms but also consumers to invest in security.
Abstract:
This thesis aims at investigating a new approach to document analysis based on the idea of structural patterns in XML vocabularies. My work is founded on the belief that authors do naturally converge to a reasonable use of markup languages and that extreme, yet valid instances are rare and limited. Actual documents, therefore, may be used to derive classes of elements (patterns) persisting across documents and distilling the conceptualization of the documents and their components, and may give ground for automatic tools and services that rely on no background information (such as schemas) at all. The central part of my work consists in introducing from the ground up a formal theory of eight structural patterns (with three sub-patterns) that are able to express the logical organization of any XML document, and verifying their identifiability in a number of different vocabularies. This model is characterized by and validated against three main dimensions: terseness (i.e. the ability to represent the structure of a document with a small number of objects and composition rules), coverage (i.e. the ability to capture any possible situation in any document) and expressiveness (i.e. the ability to make explicit the semantics of structures, relations and dependencies). An algorithm for the automatic recognition of structural patterns is then presented, together with an evaluation of the results of a test performed on a set of more than 1100 documents from eight very different vocabularies. This language-independent analysis confirms the ability of patterns to capture and summarize the guidelines used by the authors in their everyday practice. Finally, I present some systems that work directly on the pattern-based representation of documents. The ability of these tools to cover very different situations and contexts confirms the effectiveness of the model.
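A much-simplified illustration of the kind of structural classification involved: the sketch below sorts the elements of an XML document into four coarse content-model classes (empty, text-only, element-only, mixed). The thesis's actual model comprises eight patterns plus sub-patterns; the categories and code here are only a hypothetical approximation of the idea, not the recognition algorithm it describes.

```python
# Coarse structural classification of XML elements by content model.
# Simplified stand-in for the eight-pattern model, not the thesis's algorithm.
import xml.etree.ElementTree as ET
from collections import Counter

def classify(elem):
    has_children = len(elem) > 0
    own_text = (elem.text or "").strip()
    tail_texts = any((child.tail or "").strip() for child in elem)
    has_text = bool(own_text) or tail_texts
    if has_children and has_text:
        return "mixed"          # e.g. a paragraph with inline markup
    if has_children:
        return "element-only"   # pure container
    if has_text:
        return "text-only"      # leaf carrying content
    return "empty"              # milestone/metadata element

def pattern_profile(xml_string):
    root = ET.fromstring(xml_string)
    return Counter(classify(e) for e in root.iter())

doc = "<doc><p>Some <em>inline</em> text</p><meta/><sec><p>More</p></sec></doc>"
print(pattern_profile(doc))
# Counter({'element-only': 2, 'text-only': 2, 'mixed': 1, 'empty': 1})
```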
Abstract:
The present study investigated the feasibility and quality of the distribution of oral drugs in single-dose blister packs per divided dosage form (EVA, from the German "Einzeldosisblisterverpackung je abgeteilte Arzneiform"). The study was conducted as an open, comparative, prospective, multicenter patient study. Diovan®, CoDiovan®, and amlodipine were available as study medication in the EVA packaging. The distribution error rate in the EVA and control groups was the primary endpoint. Patient knowledge, patient satisfaction, and the practicability of the EVA system, as well as the satisfaction of the nursing staff, were evaluated using questionnaires. In total, 2070 valid tablet administrations in 332 patients in six different hospitals were examined. A distribution error rate of 1.8% was determined in the EVA group and 0.7% in the control group. A total of 292 patient questionnaires could be evaluated. The results showed an insufficient level of patient knowledge about their current oral medication. In the 80 completed nursing staff questionnaires, over 80% stated that dispensing errors can be detected more easily with the EVA system. In summary, the higher error rate in the EVA group compared with the control group was caused by several confounding factors. Overall, a very positive response to the EVA system was observed among both patients and nursing staff.
Abstract:
Plasmonic nanoparticles exhibit strong light scattering efficiency due to the oscillations of their conduction electrons (plasmon), which are excited by light. For rod-shaped nanoparticles, the resonance position is highly tunable by the aspect ratio (length/width), and the sensitivity to changes in the refractive index of the local environment depends on their diameter and hence their volume. Therefore, rod-shaped nanoparticles are highly suitable as plasmonic sensors.

Within this thesis, I study the formation of gold nanorods and of nanorods from a gold-copper alloy using a combination of small-angle X-ray scattering and optical extinction spectroscopy. The latter represents one of the first metal alloy nanoparticle synthesis protocols for producing rod-shaped single-crystalline gold-copper (AuxCu(1-x)) alloyed nanoparticles. I find that both length and width independently follow an exponential growth behavior with different time constants, which intrinsically leads to a switch between positive and negative aspect ratio growth during the course of the synthesis. In a parameter study, I find linear relations for the rate constants as a function of the [HAuCl4]/[CTAB] ratio and the [HAuCl4]/[seed] ratio. Furthermore, I find a correlation between the final aspect ratio and the ratio of the rate constants for length and width growth for different [AgNO3]/[HAuCl4] ratios. I identify ascorbic acid as the yield-limiting species in the reaction by means of spectroscopic monitoring and TEM. Finally, I present the use of plasmonic nanorods that absorb light at 1064 nm as contrast agents for photoacoustic imaging (BMBF project Polysound).

In the physics part, I present my automated dark-field microscope, which is capable of collecting spectra in the range of 450 nm to 1750 nm. I show the characteristics of this setup for spectra acquisition in the UV-VIS range and how I use this information to simulate measurements. I show the major noise sources of the measurements and ways to reduce the noise, and how the combination of setup characteristics and simulations of sensitivity and sensing volume can be used to select appropriate gold rods for the detection of single unlabeled proteins. Using my setup, I show how to estimate the size of gold nanorods directly from the plasmon linewidth measured from optical single-particle spectra. I then use this information to reduce the distribution (between particles) of the measured plasmonic sensitivity S by 30% by correcting for the systematic error introduced by the variation in particle size. I also investigate the single-particle scattering of bowtie structures, i.e. structures consisting of two (mostly) equilateral triangles pointing one tip at each other. I simulate the spectra of the structures considering the oblique illumination angle in my setup, which leads to additional plasmon modes in the spectra. The simulations agree well with the measurements from a qualitative point of view.
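A hedged sketch of why independent exponential growth of length and width can produce the observed switch in aspect-ratio growth, using generic saturating-growth expressions; the initial sizes, plateau values, and time constants below are placeholder symbols, not fitted values from the thesis.

```latex
% Assumed saturating exponential growth of length and width (placeholder symbols):
L(t) = L_\infty - (L_\infty - L_0)\, e^{-t/\tau_L}, \qquad
W(t) = W_\infty - (W_\infty - W_0)\, e^{-t/\tau_W}, \qquad
\mathrm{AR}(t) = \frac{L(t)}{W(t)}.
% If \tau_L \neq \tau_W, the faster-saturating dimension dominates early growth while the
% slower one catches up later, so d(\mathrm{AR})/dt can change sign during the synthesis.
```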
Abstract:
In this thesis, we develop high precision tools for the simulation of slepton pair production processes at hadron colliders and apply them to phenomenological studies at the LHC. Our approach is based on the POWHEG method for the matching of next-to-leading order results in perturbation theory to parton showers. We calculate matrix elements for slepton pair production and for the production of a slepton pair in association with a jet perturbatively at next-to-leading order in supersymmetric quantum chromodynamics. Both processes are subsequently implemented in the POWHEG BOX, a publicly available software tool that contains general parts of the POWHEG matching scheme. We investigate phenomenological consequences of our calculations in several setups that respect experimental exclusion limits for supersymmetric particles and provide precise predictions for slepton signatures at the LHC. The inclusion of QCD emissions in the partonic matrix elements allows for an accurate description of hard jets. Interfacing our codes to the multi-purpose Monte-Carlo event generator PYTHIA, we simulate parton showers and slepton decays in fully exclusive events. Advanced kinematical variables and specific search strategies are examined as means for slepton discovery in experimentally challenging setups.
Abstract:
Patients can make contributions to the safety of chemotherapy administration but little is known about their motivations to participate in safety-enhancing strategies. The theory of planned behavior was applied to analyze attitudes, norms, behavioral control, and chemotherapy patients' intentions to participate in medical error prevention.
Abstract:
A confocal imaging and image processing scheme is introduced to visualize and evaluate the spatial distribution of spectral information in tissue. The image data are recorded using a confocal laser-scanning microscope equipped with a detection unit that provides high spectral resolution. The processing scheme is based on spectral data, is less error-prone than intensity-based visualization and evaluation methods, and provides quantitative information on the composition of the sample. The method is tested and validated in the context of the development of dermal drug delivery systems, and a quantitative uptake indicator for comparing the performance of different delivery systems is introduced. A drug penetration study was performed in vitro. The results show that the method is able to detect, visualize, and measure spectral information in tissue. In the penetration study, the uptake efficiencies of different experimental setups could be discriminated and quantitatively described. The developed uptake indicator is a step towards a quantitative assessment and, more generally beyond pharmaceutical research, provides valuable information on tissue composition. It can potentially be used for clinical in vitro and in vivo applications.
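A minimal sketch of spectral (rather than intensity-based) evaluation of confocal image data: each pixel spectrum is decomposed into known reference spectra by non-negative least squares, and an uptake indicator is computed as the fraction of signal attributed to the drug. The reference spectra, the decomposition method, and the indicator definition are assumptions for illustration, not the processing scheme developed in the work above.

```python
# Hypothetical per-pixel spectral unmixing of a confocal image stack and a simple
# uptake indicator; reference spectra and the indicator definition are illustrative.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(spectrum, references):
    """Fit a pixel spectrum as a non-negative combination of reference spectra."""
    coeffs, _residual = nnls(references.T, spectrum)   # references: (n_components, n_channels)
    return coeffs

def uptake_map(image, references, drug_index=0):
    """Return the per-pixel fraction of signal attributed to the drug reference spectrum."""
    h, w, _n_channels = image.shape
    fractions = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            c = unmix_pixel(image[i, j], references)
            total = c.sum()
            fractions[i, j] = c[drug_index] / total if total > 0 else 0.0
    return fractions

# Toy example: 2 reference spectra over 4 spectral channels, 8x8 "image".
refs = np.array([[1.0, 0.8, 0.2, 0.0],      # drug
                 [0.0, 0.3, 0.9, 1.0]])     # tissue autofluorescence
img = np.random.default_rng(1).random((8, 8, 4))
uptake = uptake_map(img, refs)
print(float(uptake.mean()))                  # mean uptake indicator over the field of view
```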