Abstract:
We present a rigorous, regularization-independent, local quantum field theoretic treatment of the Casimir effect for a quantum scalar field of mass μ ≠ 0 which yields closed-form expressions for the energy density and pressure. As an application we show that there exist special states of the quantum field in which the expectation value of the renormalized energy-momentum tensor is, for any fixed time, independent of the space coordinate and of the perfect-fluid form g_{μν}ρ with ρ > 0, thus providing a concrete quantum field theoretic model of the cosmological constant. This ρ represents the energy density associated with a state consisting of the vacuum and a certain number of excitations of zero momentum, i.e., the constituents correspond to lowest energy and pressure p ≤ 0.
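For readers decoding the plain-text notation, the claimed form of the renormalized stress tensor can be written out as follows (a restatement of the abstract's formula; the metric signature (+,−,−,−) is our assumption, not stated in the abstract):

\[
\langle T_{\mu\nu} \rangle_{\mathrm{ren}} = \rho\, g_{\mu\nu}, \qquad \rho > 0,
\]

which in that signature corresponds to energy density ρ and pressure p = −ρ ≤ 0, precisely the equation of state of a cosmological-constant term.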
Abstract:
The use of the spin of the electron as the ultimate logic bit (in what has been dubbed spintronics) can lead to a novel way of thinking about information flow. At the same time, single-layer graphene has been the subject of intense research due to its potential application in nanoscale electronics. While defects can significantly alter the electronic properties of nanoscopic systems, the lack of control can lead to seemingly deleterious effects arising from the random arrangement of such impurities. Here we demonstrate, using ab initio density functional theory and non-equilibrium Green's function calculations, that it is possible to obtain perfect spin selectivity in doped graphene nanoribbons to produce a perfect spin filter. We show that initially unpolarized electrons entering the system give rise to 100% polarization of the current due to random disorder. This effect is explained in terms of different localization lengths for each spin channel, which leads to a new, disorder-driven mechanism for the spin-filtering effect.
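The mechanism can be made concrete with a standard localization argument (our illustration, not an equation from the paper): if each spin channel σ localizes with its own length ξ_σ, the transmissions decay as

\[
T_\sigma(L) \sim e^{-2L/\xi_\sigma},
\qquad
P(L) = \frac{T_\uparrow - T_\downarrow}{T_\uparrow + T_\downarrow},
\]

so for ribbon lengths ξ_↓ ≪ L ≲ ξ_↑ one channel is exponentially suppressed while the other still conducts, and the polarization P approaches 1.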
Abstract:
Assuming that nuclear matter can be treated as a perfect fluid, we study the propagation of perturbations in the baryon density at high temperature. The equation of state is derived from the non-linear Walecka model. The expansion of the Euler and continuity equations of relativistic hydrodynamics around equilibrium configurations leads to the breaking wave equation for the density perturbation. We solve it numerically and follow the propagation of the initial pulses.
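The starting point is the standard pair of perfect-fluid conservation laws (written here in one common convention; the paper's conventions may differ):

\[
\partial_\mu (n u^\mu) = 0,
\qquad
\partial_\mu T^{\mu\nu} = 0,
\quad
T^{\mu\nu} = (\varepsilon + p)\, u^\mu u^\nu - p\, g^{\mu\nu},
\]

whose projection orthogonal to the four-velocity gives the relativistic Euler equation \( (\varepsilon + p)\, u^\mu \partial_\mu u^\nu = (g^{\nu\mu} - u^\nu u^\mu)\,\partial_\mu p \); expanding n, ε and p around their equilibrium values and keeping the leading nonlinear terms produces the breaking wave equation for the density perturbation.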
Abstract:
We present a technique to build, within a dissipative bosonic network, decoherence-free channels (DFCs): a group of normal-mode oscillators with null effective damping rates. We verify that the states protected within the DFC define the well-known decoherence-free subspaces (DFSs) when mapped back into the natural network oscillators. Therefore, our technique to build protected normal-mode channels turns out to be an alternative way to build DFSs, which offers advantages over the conventional method. It enables the computation of all the network-protected states at once, as well as leading naturally to the concept of the decoherence quasi-free subspace (DQFS), inside which a superposition state is quasi-completely protected against decoherence. The concept of the DQFS, weaker than that of the DFS, may provide a more manageable mechanism to control decoherence. Finally, as an application of the DQFSs, we show how to build them for quasi-perfect state transfer in networks of coupled quantum dissipative oscillators.
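A minimal numerical sketch of the construction (our own illustration: a chain of oscillators coupled to one common reservoir, with the effective damping of mode k taken proportional to |Σ_j c_j v_{jk}|², a standard collective-decay assumption rather than the paper's exact formalism):

```python
import numpy as np

# Hypothetical network: N identical oscillators, nearest-neighbour
# coupling g, all coupled with equal strength to a single common reservoir.
N = 4
g = 0.1
H = np.eye(N) + g * (np.eye(N, k=1) + np.eye(N, k=-1))
c = np.ones(N)                      # couplings to the common bath

freqs, V = np.linalg.eigh(H)        # columns of V are the normal modes

# Collective-decay assumption: a normal mode whose amplitudes sum to
# zero does not couple to the common bath, so its effective damping
# vanishes and it belongs to the decoherence-free channel (DFC).
Gamma = (c @ V) ** 2

print("mode frequencies :", np.round(freqs, 3))
print("effective damping:", np.round(Gamma, 6))
print("protected modes  :", np.where(Gamma < 1e-12)[0])
```

For this symmetric chain the antisymmetric modes decouple from the bath, so two of the four normal-mode oscillators come out with null effective damping, illustrating the DFC idea in the abstract.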
Abstract:
There has always been a need for the ability to create color proofs. When an error occurs late in the production process, it is always complicated and difficult to correct. In this project, digital proofs have been made and discussions have been held with several people in the printing industry, in order to examine how well existing digital proofs meet the demands of the market, and how close digital proofs can come to the actual print sheet from the press. The study has shown that the factor with the greatest influence on the quality of a digital proof is the print-shop operator's knowledge of color management and proofing systems. Many advertising agencies in the graphic industry consider rasterised proofs unnecessary and expensive. They therefore prefer a cheaper alternative, which does not show colors as well as the rasterised proof, but well enough to be content with. There is good awareness of the lack of communication between print shop, reproduction and advertising agency. Advertising agencies think that print shops rarely listen to what they have to say, while print shops think that the advertising agencies do not understand what they are trying to tell them. The outcome of the printed proofs in this study cannot be taken as representative of how well digital proofs are produced on a regular basis in the industry. The divergence between the press sheet and the digital proof that was made was bigger than expected. This shows that implementing ICC profiles in a color management flow is not, by itself, the answer to making perfect digital proofs. Many other issues have to be examined, such as the color management software, the measurement tools and the correct color management module. In order to make a perfect proof, you have to look at the whole picture. In the end, the human eye has the last word on whether the proof is good or not.
Abstract:
When using digital halftone proofing systems, a closer print match can be achieved than was earlier possible with analogue proofing systems. These systems' ability to produce an accurate print match can, however, equally lead to bad print matches, since several print-related parameters can be adjusted manually in the system by the user. Therefore, more advanced knowledge of graphic arts technology is required of the system's user. The prepress company Colorcraft AB wishes to verify that their color proofs always have the right quality. This project was started with the purpose of finding a quality control method for Colorcraft's digital halftone proofing system (Kodak Approval XP4). Using software that supports spectral measurement, combined with a spectrophotometer and a control bar, a quality control system was assembled. This system detects variations that lie outside the proofing system's natural deviation. The prerequisite for this quality control system is that the tolerances are defined with consideration for the proofing system's natural deviations. Otherwise the quality control system will generate unnecessary false alarms and therefore not be reliable.
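A sketch of the kind of check such a system performs (our illustration: the patch values and tolerance are hypothetical, and the simple CIE 1976 ΔE*ab formula is used; a real deployment might use ΔE2000 or full spectral comparison):

```python
import math

def delta_e76(lab1, lab2):
    """CIE 1976 color difference between two L*a*b* triples."""
    return math.dist(lab1, lab2)

# Hypothetical reference values for two control-bar patches,
# and values measured on one proof with a spectrophotometer.
reference = {"cyan": (55.0, -37.0, -50.0), "magenta": (48.0, 74.0, -3.0)}
measured  = {"cyan": (54.2, -36.1, -49.0), "magenta": (46.5, 76.8, -2.2)}

TOLERANCE = 3.0   # assumed limit, set wider than the system's natural deviation

for patch, ref in reference.items():
    de = delta_e76(ref, measured[patch])
    print(f"{patch:8s} dE*ab = {de:4.2f}  {'OK' if de <= TOLERANCE else 'ALARM'}")
```

The tolerance is the crucial design parameter: set tighter than the proofing system's natural deviation it produces the false alarms the abstract warns about.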
Abstract:
This thesis presents an adaptive tuning system that can be described as a dynamic just-intonation tuning system compatible with equally tempered instruments. The tuning system is called Hermode Tuning (HMT), and the tuning used as comparison for the evaluation is the standardized western tuning, equal temperament. This study investigates preferences for these two musical tuning systems, depending on whether the tunings are presented on a piano or with woodwind instruments. A listening test was conducted with students at the Falun Conservatory of Music, including both vertical listening (intervals) and horizontal listening (cadences and musical compositions) of Hermode-tuned musical material. Overall, the results showed no significant preference for either tuning system, irrespective of which instrument it was presented with. The clearest results were a misjudged just-intoned perfect third on the piano and a preference for an adaptively tuned piano presented in a simple harmonic structure, with a parameter setting of HMT 70%. Materials for comparison were partly taken from Hermode's own website, but overall the attitude towards these sequences (rated on a Likert scale of one to five) showed a low expected value. This shows the complexity of the topic, and no general conclusions regarding the choice of intonation or tuning system could be drawn for the presented material.
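The difference the listeners were judging can be quantified in cents (a standard calculation, not taken from the thesis; the "perfect third" presumably refers to the major third): an equal-tempered major third is 400 cents, while the just ratio 5:4 is roughly 14 cents narrower.

```python
import math

def cents(ratio: float) -> float:
    """Size of an interval, given as a frequency ratio, in cents."""
    return 1200 * math.log2(ratio)

equal_third = cents(2 ** (4 / 12))   # four equal-tempered semitones: 400.0
just_third  = cents(5 / 4)           # just-intonation major third: ~386.3

print(f"equal-tempered third: {equal_third:.1f} cents")
print(f"just third:           {just_third:.1f} cents")
print(f"difference:           {equal_third - just_third:.1f} cents")   # ~13.7
```

An adaptive system such as HMT retunes sounding notes toward such just ratios on the fly, by amounts scaled with the parameter setting (e.g. 70%).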
Abstract:
This report describes the work done creating a computer model of a kombi tank from Consolar. The model was created with Presim/Trnsys, and Fittrn and DF were used to identify the parameters. Measurements were carried out and used to identify the values of the parameters in the model. The identifications were first done for every circuit separately. After that, all parameters are normally identified together using all the measurements. Finally, the model should be compared with other measurements, preferably realistic ones. The two last steps have not yet been carried out because of problems finding a good model for the domestic hot water circuit. The model of the domestic hot water circuit gives relatively good results for low flows of 5 l/min, but is not good for higher flows. The report gives suggestions for improving the model; however, there was not enough time to test them within the project, as much time was spent trying to solve problems with the model crashing. Suggestions for improving the model for the domestic circuit are given in chapter 4.4. The improved equations to be used in the improved model are given by equations 4.18, 4.19 and 4.22. For the boiler circuit and the solar circuit there are also improvements that can be made. The model presented here has a few shortcomings, but with some extra work an improved model can be created. The attachment (Bilaga 1) contains a description of the used model and all the identified parameters. A qualitative assessment of the store was also performed, based on the measurements and the modelling carried out. It can be summarized as follows.

Hot water preparation: The principle for controlling the flow on the primary side seems to work well in order to achieve good stratification. Temperatures in the bottom of the store after a short use of hot water, at a cold-water temperature of 12°C, were around 28-30°C, almost independently of the temperature in the store and the DHW flow. The measured UA-values of the heat exchangers are not very reliable, but indicate that the heat transfer rates are much better than for the Conus 500, and in the same range as for other stores tested at SERC. The function of the mixing valve is not perfect (see diagram 4.3, where Tout1 is the outlet hot water temperature, and Tdhwo and Tdhw1 are the inlet temperatures to the hot and cold side of the valve, respectively). The outlet temperature varies a lot with different temperatures in the store, going down from 61°C to 47°C before the cold port is fully closed. This makes it a problem to find a suitable temperature setting, and also gives a risk that the auxiliary heating is increased, instead of the set temperature of the valve, when the hot water temperature is too low.

Collector circuit: The UA-value of the collector heat exchanger is much higher than the value for the Conus 500, and in the same range as the heat exchangers in other stores tested at SERC.

Boiler circuit: The valve in the boiler circuit is used to supply water from the boiler at two different heights, depending on the temperature of the water. At temperatures from the boiler above 58.2°C, all the water is injected at the upper inlet. At temperatures below 53.9°C, all the water is injected at the lower inlet. At 56°C the water flow is divided equally between the two inlets. Detailed studies of the behaviour at the upper inlet show that better accuracy would have been achieved using three double ports in the model instead of two. The shape of the upper inlet creates turbulence, which could be modelled using two different inlets.

Heat losses: The heat losses per m³ are much smaller for the Solus 1050 than for the Conus 500 store. However, they are higher than those for some good stores tested at SERC. The pipes that penetrate the insulation give air leakage and cold bridges, which could be a major part of the losses from the store. The identified losses from the bottom of the store are exceptionally high, but have less importance for the heat losses due to the lower temperatures in the bottom. High losses from the bottom can be caused by air leakage through the insulation at the pipe connections of the store.
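The described boiler-circuit valve behaviour can be captured with a simple interpolation model (our sketch: the two thresholds are taken from the text, but the linear ramp between them is an assumption, chosen because it reproduces the roughly equal split at 56°C):

```python
def upper_inlet_fraction(t_boiler: float) -> float:
    """Fraction of boiler flow routed to the upper inlet.

    Above 58.2 degC everything goes to the upper inlet, below 53.9 degC
    everything to the lower inlet; in between we assume a linear ramp.
    """
    T_LOW, T_HIGH = 53.9, 58.2
    if t_boiler >= T_HIGH:
        return 1.0
    if t_boiler <= T_LOW:
        return 0.0
    return (t_boiler - T_LOW) / (T_HIGH - T_LOW)

for t in (50.0, 56.0, 60.0):
    print(f"{t:.1f} C -> {upper_inlet_fraction(t):.2f} of flow to upper inlet")
# 56.0 C gives about 0.49, i.e. the flow is split roughly equally.
```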
Abstract:
Increasing costs and competitive business strategies are pushing sawmill enterprises to optimize their process management. Organizational decisions mainly concentrate on performance and on the reduction of operational costs in order to maintain profit margins. Although many efforts have been made, effective utilization of resources, optimal planning and maximum productivity in the sawmill yard remain challenging for the sawmill industry. Many researchers have proposed simulation models in combination with optimization techniques to address problems of integrated logistics optimization. The combination of simulation and optimization identifies the optimal strategy by simulating all complex behaviours of the system under consideration, including objectives and constraints. During the past decade, a large number of studies simulated operational inefficiencies in order to find optimal solutions. This paper reviews recent developments and challenges associated with simulation and optimization techniques. The review is intended to provide the authors with a solid foundation for further work on optimizing sawmill yard operations.
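The simulation-optimization pattern the review surveys can be summarized in a few lines (a generic sketch with a made-up stochastic yard model and holding cost, not a model from any reviewed study): an optimization layer proposes candidate strategies, and a stochastic simulator evaluates each one.

```python
import random

def simulate(buffer_size: int, n_days: int = 200) -> float:
    """Stochastic simulation of a hypothetical sawmill yard: returns the
    average daily throughput achieved with a given log-buffer size."""
    rng = random.Random(42)                    # same noise for every candidate
    total = 0.0
    for _ in range(n_days):
        arrivals = rng.gauss(100, 15)          # logs arriving per day
        capacity = 80 + 0.5 * buffer_size      # a larger buffer smooths peaks
        total += min(arrivals, capacity)
    return total / n_days

def objective(buffer_size: int) -> float:
    """Optimization layer: throughput minus an assumed holding cost."""
    return simulate(buffer_size) - 0.3 * buffer_size

best = max(range(0, 65, 5), key=objective)
print(f"best buffer size: {best} logs, objective = {objective(best):.1f}")
```

Real studies replace the toy simulator with a discrete-event model of the yard and the exhaustive search with metaheuristics, but the evaluate-candidates-by-simulation loop is the same.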
Abstract:
This licentiate thesis sets out to analyse how a retail price decision frame can be understood. It is argued that price determination within retailing can be viewed by determining the level of rationality and using behavioural theories. In this way, assumptions derived from economics and marketing can be used to establish a decision frame. By taking a management perspective, it is possible to take into consideration how the retailer is assumed to strategically manage price decisions, which decisions might be assumed to be price decisions, and which decisions can be assumed to be under the retailer's control. Theoretically, this licentiate thesis has its foundations in different assumptions about decision frames regarding the level of information collected, the goal of the decisions, and the outcomes of the decisions. Since the concepts analysed within this thesis are price decisions, the latter part of the theory discusses price decisions specifically: sequential price decisions, the point of decision, and trade-offs when making a decision. Here, it is evident that a conceptual decision frame intended to illustrate price decisions includes several aspects: several decision alternatives and the assumptions of rationality that can be made in relation to the decision frame. A semi-structured literature review was conducted. As a result, it became apparent that two important things in the decision frame were unclear: the time assumptions regarding the decisions and the amount of information that is assumed in relation to the different decision alternatives. Using the same articles that were used to adjust the decision frame, a topical study was made in order to determine the time-specific assumptions, as well as the analytical level based on the information assumed necessary for individual decision alternatives. This, together with an experimental study, was necessary to be able to discuss the consequences of the rationality assumption. When the retail literature is analysed for the level of rationality and the consequences of making certain assumptions of rationality, three main things become apparent. First, the level of rationality or the assumptions of rationality are seldom stated or accounted for in the literature. In fact, there are indications that perfect and bounded rationality assumptions are used simultaneously within studies. Second, although bounded rationality is a recognised theoretical perspective, very few articles seem to use these assumptions. Third, since the outcome of a price decision seems to provide no incremental sale, it is questionable which assumptions of rationality should be used; it might even be the case that no assumptions of rationality should be used at all. In a broader perspective, the findings from this licentiate thesis show that the assumptions of rationality within retail research are unclear. There is an imbalance between the perspectives used, where the main assumptions seem to be concentrated on perfect rationality. However, it is suggested that clarifying which assumptions of rationality are used, and adopting bounded-rationality assumptions within research, would result in a clearer picture of the multifaceted price decisions that can be assumed within retailing. The theoretical contribution of this thesis mainly surrounds the identification of how the level of rationality provides limiting assumptions within retail research. Furthermore, since indications show that learning might not occur within this specific context, it is questioned whether the basic learning assumption within bounded rationality should be used in this context.
Abstract:
A system for weed management on railway embankments that is both adapted to the environment and efficient in terms of resources requires knowledge and understanding of the growing conditions of vegetation, so that methods to control its growth can be adapted accordingly. Automated records could complement present-day manual inspections and over time come to replace them. One challenge is to devise a method that results in a reasonable breakdown of the gathered information, so that it can be managed rationally by affected parties and, at the same time, serve as a basis for decisions with sufficient precision. The project examined two automated methods that may be useful for the Swedish Transport Administration in the future: 1) a machine vision method, which makes use of camera sensors to sense the environment in the visible and near-infrared spectrum; and 2) an N-Sensor method, which transmits light within an area that is reflected by the chlorophyll in the plants; the amount of chlorophyll provides a value that can be correlated with the biomass. The choice of technique depends on how the information is to be used. If the purpose is to form a general picture of the growth of vegetation on railway embankments as a way to plan maintenance measures, then the N-Sensor technique may be the right choice. If the plan is to form a general picture as well as monitor and survey the current and exact vegetation status on the surface over time, as a way to fight specific vegetation with the correct means, then the machine vision method is the better of the two. Both techniques involve registering data using GPS positioning. In the future, it will be possible to store this information in databases that are directly accessible to stakeholders online during, or in conjunction with, measures to deal with the vegetation. The two techniques were compared with manual (visual) estimations of the levels of vegetation growth. The observers' (raters') visual estimations of weed coverage (%) differed statistically from person to person. In terms of estimating the frequency (number) of woody plants (trees and bushes) in the test areas, the observers were generally in agreement. The same person is often consistent in his or her estimation: it is the comparison with the estimations of others that can lead to misleading results. The system for using the information about vegetation growth requires development. The threshold for the amount of weeds that can be tolerated on different track types is an important component in such a system. The classification system must be capable of dealing with the demands placed on it so as to ensure the quality of the track and other pre-conditions such as traffic levels, conditions pertaining to track location, and the characteristics of the vegetation.
The project recommends that the Swedish Transport Administration:
- Discusses how threshold values for the growth of vegetation on railway embankments can be determined
- Carries out registration of vegetation growth over longer railway sections and a larger number of them, using one or more of the methods studied in the project
- Introduces a system that effectively matches the information about vegetation to its position
- Includes information about the growth of vegetation in the records that are currently maintained of the track's technical quality, and links the data material to other maintenance-related databases
- Establishes a number of representative surfaces in which weed inventories (by measuring) are regularly conducted, as a means of developing an overview of the long-term development that can serve as a basis for more precise prognoses of vegetation growth
- Ensures that necessary opportunities for education are put in place
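Both sensing approaches reduce reflectance measurements to a per-position vegetation value that can be stored against GPS coordinates. A common way to do this from visible and near-infrared imagery is the NDVI (our example; the project text does not name a specific index):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).
    Values near 1 indicate dense green vegetation; near 0, bare ballast."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Hypothetical reflectance samples along a track section (0..1 scale).
red = np.array([0.08, 0.10, 0.30, 0.09])
nir = np.array([0.45, 0.50, 0.32, 0.47])

index = ndvi(nir, red)
weedy = index > 0.4            # assumed per-sample threshold
print(np.round(index, 2), "->", weedy)
```

The threshold in the last step is exactly the kind of tolerance value the project recommends the Swedish Transport Administration define per track type.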
Abstract:
The present study aims to map which design factors are most common on the labels of successful ale producers from different countries, and to investigate whether consumers can identify the country of production solely by examining the label. To investigate this, a visual content analysis was carried out to extract design factors for five selected countries, after which five typical examples were created. The typical examples were tested on consumers via a web survey. The conclusion is that in some cases a typical example can be produced, and in some cases some of the consumers can identify the intended country of production. In the majority of cases, however, it was not possible to produce a perfect typical example, and the consumers could not identify the intended country of production.
Abstract:
This paper presents a computer-vision-based, marker-free method for gait-impairment detection in patients with Parkinson's disease (PWP). The system is based upon the idea that a normal human body attains equilibrium during gait by aligning the body posture with the axis of gravity (AOG), using the feet as the base of support. In contrast, PWP appear to be falling forward, as they are less able to align their body with the AOG due to rigid muscular tone. A normal gait exhibits periodic stride cycles with a stride angle of around 45° between the legs, whereas PWP walk with a shortened stride angle and high variability between stride cycles. In order to analyze Parkinsonian gait (PG), subjects were videotaped over several gait cycles. The subject's body was segmented using a color-segmentation method to form a silhouette. The silhouette was skeletonized for motion-cue extraction. The motion cues analyzed were stride cycles (based on the cyclic leg motion of the skeleton) and posture lean (based on the angle between the leaned torso of the skeleton and the AOG). Cosine similarity between an imaginary perfect gait pattern and the subjects' gait patterns produced a 100% recognition rate of PG for 4 normal controls and 3 PWP. The results suggest that the method is a promising tool for PG assessment in a home environment.
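The classification step reduces to a cosine similarity between gait-feature vectors (a sketch of the stated measure; the feature choices and values below are invented for illustration):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical gait feature vectors: [mean stride angle (deg),
# stride-angle variability, posture-lean angle (deg)].
ideal_gait = np.array([45.0, 1.0, 0.0])   # the imaginary perfect gait pattern
subject    = np.array([28.0, 6.5, 9.0])   # shortened stride, variable, leaning

score = cosine_similarity(ideal_gait, subject)
print(f"similarity to perfect gait: {score:.3f}")
# A low score relative to normal controls flags Parkinsonian gait.
```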
Abstract:
The paper investigates which of Shannon’s measures (entropy, conditional entropy, mutual information) is the right one for the task of quantifying information flow in a programming language. We examine earlier relevant contributions from Denning, McLean and Gray and we propose and motivate a specific quantitative definition of information flow. We prove results relating equivalence relations, interference of program variables, independence of random variables and the flow of confidential information. Finally, we show how, in our setting, Shannon’s Perfect Secrecy theorem provides a sufficient condition to determine whether a program leaks confidential information.
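In Shannon's notation (standard definitions, not necessarily the paper's exact formulation), with secret input S and public output O of a program, the leakage can be measured by the mutual information

\[
I(S;O) \;=\; H(S) - H(S \mid O) \;=\; \sum_{s,o} p(s,o)\,\log_2 \frac{p(s,o)}{p(s)\,p(o)},
\]

and the program leaks nothing exactly when I(S;O) = 0, i.e. when S and O are independent; this independence is the same condition that Shannon's Perfect Secrecy theorem places on message and ciphertext.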
Abstract:
It is rare for data's history to include computational processes alone. Even when software generates data, users ultimately decide to execute software procedures, choose their configuration and inputs, reconfigure, halt and restart processes, and so on. Understanding the provenance of data thus involves understanding the reasoning of users behind these decisions, but demanding that users explicitly document decisions could be intrusive if implemented naively, and impractical in some cases. In this paper, therefore, we explore an approach to transparently deriving the provenance of user decisions at query time. The user's reasoning is simulated, and if the result of the simulation matches the documented decision, the simulation is taken to approximate the actual reasoning. The plausibility of this approach requires that the simulation mirror human decision-making, so we adopt an automated process explicitly modelled on human psychology. The provenance of the decision is modelled in OPM, allowing it to be queried as part of a larger provenance graph, and an OPM profile is provided to allow consistent querying of provenance across user decisions.
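The simulate-and-compare step can be phrased as a small validation loop (our paraphrase in code; the function names and the decision model are hypothetical, and a real system would record the accepted result in an OPM graph rather than a dict):

```python
from typing import Any, Callable, Optional

def approximate_reasoning(documented_decision: Any,
                          context: dict,
                          simulate: Callable[[dict], Any]) -> Optional[dict]:
    """Run a psychologically motivated decision model on the recorded
    context; accept it as an approximation of the user's reasoning only
    if it reproduces the decision the user actually made."""
    simulated = simulate(context)
    if simulated == documented_decision:
        return {"decision": simulated, "derivation": "simulated", "context": context}
    return None   # simulation rejected: provenance of the reasoning stays unknown

# Hypothetical example: a user chose to restart a failed job.
model = lambda ctx: "restart" if ctx["failures"] < 3 else "halt"
print(approximate_reasoning("restart", {"failures": 1}, model))
```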