984 results for Extreme bounds analysis
Abstract:
What is the minimal size quantum circuit required to exactly implement a specified n-qubit unitary operation, U, without the use of ancilla qubits? We show that a lower bound on the minimal size is provided by the length of the minimal geodesic between U and the identity, I, where length is defined by a suitable Finsler metric on the manifold SU(2^n). The geodesic curves on these manifolds have the striking property that once an initial position and velocity are set, the remainder of the geodesic is completely determined by a second order differential equation known as the geodesic equation. This is in contrast with the usual case in circuit design, either classical or quantum, where being given part of an optimal circuit does not obviously assist in the design of the rest of the circuit. Geodesic analysis thus offers a potentially powerful approach to the problem of proving quantum circuit lower bounds. In this paper we construct several Finsler metrics whose minimal length geodesics provide lower bounds on quantum circuit size. For each Finsler metric we give a procedure to compute the corresponding geodesic equation. We also construct a large class of solutions to the geodesic equation, which we call Pauli geodesics, since they arise from isometries generated by the Pauli group. For any unitary U diagonal in the computational basis, we show that: (a) provided the minimal length geodesic is unique, it must be a Pauli geodesic; (b) finding the length of the minimal Pauli geodesic passing from I to U is equivalent to solving an exponential size instance of the closest vector in a lattice problem (CVP); and (c) all but a doubly exponentially small fraction of such unitaries have minimal Pauli geodesics of exponential length.
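Result (b) reduces geodesic length to a closest-vector computation. As a toy illustration of CVP itself (nowhere near the exponential-size instances the paper constructs), a brute-force solver over a small lattice can be sketched as follows; the basis, target, and search radius are illustrative choices, and the 2*pi lattice merely mimics the 2*pi phase ambiguity of a diagonal unitary.

```python
import itertools
import math

def closest_vector(basis, target, radius=3):
    """Brute-force closest vector problem (CVP): find the lattice point
    sum_j c_j * basis[j] (integer c_j) nearest to `target` in Euclidean
    norm. Only feasible in tiny dimension; CVP is NP-hard in general."""
    dim = len(basis)
    best, best_dist = None, math.inf
    for coeffs in itertools.product(range(-radius, radius + 1), repeat=dim):
        point = [sum(c * basis[j][i] for j, c in enumerate(coeffs))
                 for i in range(dim)]
        dist = math.dist(point, target)
        if dist < best_dist:
            best, best_dist = point, dist
    return best, best_dist

# Lattice 2*pi*Z^2, mimicking the phase ambiguity of a diagonal
# unitary; the target phases are arbitrary example values.
two_pi = 2 * math.pi
basis = [[two_pi, 0.0], [0.0, two_pi]]
target = [7.0, -5.5]
vec, dist = closest_vector(basis, target)
```

The enumeration cost grows as (2*radius+1)^dim, which is exactly why the exponential-size CVP instances in the paper are out of reach for such a direct approach.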
Abstract:
In this paper we introduce and illustrate non-trivial upper and lower bounds on the learning curves for one-dimensional Gaussian Processes. The analysis is carried out emphasising the effects induced on the bounds by the smoothness of the random process described by the Modified Bessel and the Squared Exponential covariance functions. We present an explanation of the early, linearly-decreasing behavior of the learning curves and the bounds as well as a study of the asymptotic behavior of the curves. The effects of the noise level and the lengthscale on the tightness of the bounds are also discussed.
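To reproduce the qualitative shape of such learning curves numerically, the Bayes generalization error of one-dimensional GP regression (the posterior variance averaged over the input distribution) can be estimated by Monte Carlo. The sketch below uses the squared exponential covariance; the lengthscale, noise level, and sample sizes are illustrative values, not the paper's settings.

```python
import numpy as np

def sq_exp_kernel(x1, x2, lengthscale=0.1):
    """Squared exponential covariance k(x, x') = exp(-(x-x')^2 / (2 l^2))."""
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / lengthscale) ** 2)

def learning_curve(train_sizes, noise_var=0.01, n_test=200, n_repeats=20, seed=0):
    """Monte Carlo estimate of the GP generalization error: the posterior
    variance at random test inputs, averaged over random training sets
    drawn uniformly on [0, 1]."""
    rng = np.random.default_rng(seed)
    xs_test = rng.uniform(0.0, 1.0, n_test)
    curve = []
    for n in train_sizes:
        errs = []
        for _ in range(n_repeats):
            x = rng.uniform(0.0, 1.0, n)
            K = sq_exp_kernel(x, x) + noise_var * np.eye(n)
            k_star = sq_exp_kernel(xs_test, x)   # shape (n_test, n)
            # Posterior variance: k(x*, x*) - k* K^{-1} k*^T (diagonal only).
            v = np.linalg.solve(K, k_star.T)     # shape (n, n_test)
            post_var = 1.0 - np.sum(k_star.T * v, axis=0)
            errs.append(post_var.mean())
        curve.append(float(np.mean(errs)))
    return curve

curve = learning_curve([1, 5, 20, 80])
```

Plotting `curve` against the training sizes shows the early, roughly linear decrease and the later flattening toward the noise floor that the bounds in the paper bracket.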
Abstract:
Determining an appropriate research methodology is considered an important element in a research study, especially in a doctoral research study. It involves an approach to the entire process of the study, starting from theoretical underpinnings, spanning data collection and analysis, and extending to developing solutions for the problems investigated. Research methodology is, in essence, focused on the problems to be investigated in a study and therefore varies according to those problems. Thus, identifying the research methodology that best suits the research at hand is important, not only because it will help achieve the set objectives of the research, but also because it will help establish the credibility of the work. Research philosophy, approach, strategy, choice, and techniques are inherent components of the methodology. The research strategy provides the overall direction of the research, including the process by which the research is conducted. Case study, experiment, survey, action research, grounded theory and ethnography are examples of such research strategies. The case study is documented as an empirical inquiry that investigates a contemporary phenomenon within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident. Case study was adopted as the overarching research strategy in a doctoral study developed to investigate the resilience of construction Small and Medium-sized Enterprises (SMEs) in the UK to extreme weather events (EWEs). The research sought to investigate how construction SMEs are affected by EWEs and respond to the risk of EWEs, and the means of enhancing their resilience to future EWEs. By comparing and contrasting with the alternative strategies available, it is argued that utilising the case study strategy will benefit the research study in achieving its set objectives and answering the research questions raised.
It is also claimed that the selected strategy will contribute towards addressing the call for improved methodological pluralism in construction management research, enhancing the understanding of the complex network of relationships pertinent to the industry and the phenomenon being studied.
Abstract:
In the traditional TOPSIS, the ideal solutions are assumed to be located at the endpoints of the data interval. However, not all performance attributes possess ideal values at the endpoints. We term performance attributes whose ideal values lie at the extreme points Type-1 attributes. Type-2 attributes, however, possess ideal values somewhere within the data interval rather than at the extreme endpoints. This poses a preference-ranking problem when all attributes are computed and assumed to be of the Type-1 nature. To overcome this issue, we propose a new Fuzzy DEA method for computing the ideal values and distance function of Type-2 attributes within a TOPSIS methodology. Our method allows Type-1 and Type-2 attributes to be included in an evaluation system without compromising the ranking quality. The efficacy of the proposed model is illustrated with a vendor evaluation case for a high-tech investment decision-making exercise. A comparison analysis with the traditional TOPSIS is also presented. © 2012 Springer Science+Business Media B.V.
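The ranking mechanics can be illustrated with a small sketch. Note the simplification: the paper derives Type-2 ideal values with a Fuzzy DEA model, whereas here an interior target is supplied directly, and the vendor data are hypothetical.

```python
import numpy as np

def topsis_type2(matrix, weights, type2_ideals=None):
    """TOPSIS closeness coefficients (higher = better). All attributes are
    benefit-type. Type-1 attributes take their ideal at the column maximum
    and anti-ideal at the minimum; `type2_ideals` maps a column index to
    an interior target value for a Type-2 attribute. The paper obtains
    Type-2 ideals via Fuzzy DEA; supplying them directly is a shortcut."""
    type2_ideals = dict(type2_ideals or {})
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    v = m / np.linalg.norm(m, axis=0) * w    # vector-normalized, weighted
    ideal = v.max(axis=0)
    anti = v.min(axis=0)
    for j, target in type2_ideals.items():
        t = target / np.linalg.norm(m[:, j]) * w[j]
        ideal[j] = t
        # For a Type-2 attribute both extremes are bad; take the endpoint
        # farther from the interior ideal as the anti-ideal.
        lo, hi = v[:, j].min(), v[:, j].max()
        anti[j] = hi if abs(hi - t) > abs(t - lo) else lo
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Hypothetical vendor data: column 0 = quality score (Type-1),
# column 1 = an attribute whose ideal value is the interior point 80.
scores = topsis_type2([[7, 60], [9, 95], [8, 80]],
                      weights=[0.5, 0.5], type2_ideals={1: 80})
```

Under plain Type-1 treatment the vendor with the value 95 would be rewarded for being at the endpoint; with the interior ideal, the vendor sitting exactly at 80 incurs no distance penalty on that attribute.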
Abstract:
* The work is supported by RFBR, grant 04-01-00858-a
Abstract:
Mathematics Subject Classification: 47A56, 47A57, 47A63
Abstract:
AMS subject classification: 90C30, 90C33.
Abstract:
On 20 July 2010 the Hungarian power exchange, HUPX, began operation. On 16 August 2010, instead of the 45-60 EUR/MWh prices seen in the first days of trading, market participants faced a price of EUR 2,999 in certain hours. International experience shows that the appearance of extremely high prices is not unusual on power exchanges; indeed, research has paid particular attention to identifying the causes of these so-called price spikes and to the quantitative and qualitative analysis of their occurrence. In this article the author reviews the results of statistical studies of extreme prices in the literature and examines how their conclusions hold up against the time series of Hungarian prices.
The author presents a modelling framework that describes the behaviour of electricity prices with distributions whose parameters vary periodically over the hours of the week. Unfortunately, the short history of the Hungarian power exchange does not allow a separate price distribution to be fitted to each hour of the week. The author therefore sorts the hours of the week into two groups according to the character of the price distribution: hours that are risky with respect to the appearance of price spikes, and hours that are less so. A deterministic two-state regime-switching model is then built to describe the HUPX prices, with which the risky and less risky hours can be identified and a picture obtained of the nature of the extreme price movements.
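A minimal sketch of the deterministic hour-classification step is below; the synthetic price series, the 150 EUR/MWh spike threshold, and the 5% risky-share cutoff are illustrative assumptions, not the author's calibrated values.

```python
import random
from collections import defaultdict

def classify_hours(prices_by_hour, spike_threshold=150.0, risky_share=0.05):
    """Label each hour-of-week (0..167) 'risky' if the empirical share of
    prices above `spike_threshold` EUR/MWh exceeds `risky_share`,
    otherwise 'less risky' -- a deterministic two-state regime rule."""
    labels = {}
    for hour, prices in prices_by_hour.items():
        share = sum(p > spike_threshold for p in prices) / len(prices)
        labels[hour] = "risky" if share > risky_share else "less risky"
    return labels

# Synthetic example: evening peak hours occasionally spike, echoing the
# EUR 2,999 prices observed on HUPX.
random.seed(1)
prices_by_hour = defaultdict(list)
for week in range(52):
    for hour in range(168):
        price = random.gauss(55.0, 8.0)
        if hour % 24 in (18, 19) and random.random() < 0.10:
            price = random.uniform(500.0, 2999.0)   # price spike
        prices_by_hour[hour].append(price)

labels = classify_hours(prices_by_hour)
```

With separate price distributions then fitted to the two groups, this is the skeleton of the two-state model: the regime is a deterministic function of the hour of the week rather than a latent stochastic state.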
Abstract:
Materials known as Mn+1AXn phases (n = 1, 2, or 3), where M represents an early transition metal, A an A-group element, and X carbon and/or nitrogen [1], are fast becoming technologically important materials due to their interesting combination of unique properties. However, much important information about the high-temperature and high-pressure behavior of many of these compounds is still missing and needs to be determined systematically.

In this dissertation, the synthesis of M2AC compounds (M = Ti, V, Cr, Nb, Zr; A = Al, Sn, S) by arc melting, vacuum sintering and piston-cylinder synthesis is presented, along with the synthesis of Zr2SC, produced in bulk form for the first time, by the piston-cylinder technique. Microstructural analysis by electron microscopy and phase analysis by X-ray diffraction are presented next. Finally, a critical analysis of the behavior of these compounds under extreme pressure (as high as 50 GPa) and temperature (≈ 1000°C) is presented.

The high-pressure studies, up to 50 GPa, showed that these compounds remained structurally intact, with bulk moduli ranging from 140 to 190 GPa. The high-temperature studies in inert atmosphere showed that the M2SnC compounds were unstable above 650°C and that their expansion along the a-axis was higher than that along the c-axis, unlike the other phases. The M2SC compounds, on the other hand, showed negligible difference in thermal expansion along the two axes. The oxidation study revealed that the Ti2AC (A = Al, S) compounds had the highest resistance to oxidation while the M2SnC compounds had the least. Furthermore, from the oxidation study, which comprised short-time oxidation experiments, it was found that all of these compounds oxidized to their respective binary oxides.
Abstract:
Understanding who evacuates and who does not has been one of the cornerstones of research on the pre-impact phase of both natural and technological hazards. Its history is rich in descriptive illustrations focusing on lists of characteristics of those who flee to safety. Early models of evacuation focused almost exclusively on the relationship between whether warnings were heard and ultimately believed and evacuation behavior. How people came to believe these warnings, and even how they interpreted them, was not incorporated. In fact, the individual seemed almost removed from the picture, with analysis focusing exclusively on external measures.

This study built and tested a more comprehensive model of evacuation that centers on the decision-making process rather than on decision outcomes. The model focused on three important factors that alter and shape the evacuation decision-making landscape: individual-level indicators, which exist independently of the hazard itself and act as cultural lenses through which information is heard, processed and interpreted; hazard-specific variables that relate directly to the specific hazard threat; and risk perception. The ultimate goal is to determine what factors influence the evacuation decision-making process. Using data collected for Hurricane Georges in 1998, logistic regression models were used to evaluate how well the three main factors help our understanding of how individuals come to their decisions to either flee to safety during a hurricane or remain in their homes.

The results of the logistic regression were significant, emphasizing that the three broad types of factors tested in the model influence the decision-making process. Conclusions drawn from the data analysis focus on how decision-making frames differ for those who can be designated "evacuators" and for others in evacuation zones.
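The modelling step can be sketched with a minimal logistic regression fitted by gradient descent; the household predictors, effect sizes, and data below are synthetic illustrations, not the Hurricane Georges survey variables.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.5, epochs=500):
    """Logistic regression via batch gradient descent; returns weights
    with the intercept first. A minimal stand-in for the study's model
    of the binary evacuate / stay decision."""
    n_feat = len(xs[0])
    w = [0.0] * (n_feat + 1)
    for _ in range(epochs):
        grad = [0.0] * (n_feat + 1)
        for x, y in zip(xs, ys):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))     # predicted P(evacuate)
            err = p - y
            grad[0] += err
            for j, xj in enumerate(x):
                grad[j + 1] += err * xj
        w = [wi - lr * g / len(xs) for wi, g in zip(w, grad)]
    return w

# Synthetic households: (risk_perception in [0,1], prior_experience 0/1,
# heard_warning 0/1); evacuation odds rise with all three. These
# predictors and coefficients are invented for illustration.
random.seed(42)
xs, ys = [], []
for _ in range(400):
    risk = random.random()
    experience = random.randint(0, 1)
    warned = random.randint(0, 1)
    z_true = -2.0 + 3.0 * risk + 1.0 * experience + 1.5 * warned
    ys.append(1 if random.random() < 1.0 / (1.0 + math.exp(-z_true)) else 0)
    xs.append((risk, experience, warned))

w = fit_logistic(xs, ys)
```

The fitted coefficients play the role of the study's significance results: a positive weight on a factor means it raises the estimated probability of evacuating, holding the others fixed.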
Abstract:
This research investigates a new structural system utilising modular construction. Five-sided boxes are cast on-site and stacked together to form a building. An analytical model of a typical building was created in each of two analysis programs utilising the finite element method (Robot Millennium and ETABS). The pros and cons of both programs are listed at several key stages in the development of an analytical model of this structural system. Robot Millennium was utilised initially but produced an analytical model too large to run successfully; the computational requirements exceeded the capacity of conventional computers. Robot Millennium was therefore abandoned in favour of ETABS, whose simpler algorithms and assumptions permitted running this large computational model. Tips are provided, and pitfalls signalled, throughout the process of modelling complex buildings of this type.

Under high seismic loading the building required a new horizontal shear mechanism. This dissertation proposes creating a secondary floor that ties to the modular box through the use of gunwales and roughened surfaces with epoxy coatings. In addition, the vertical connections necessitated a new type of shear wall, consisting of waffled external walls tied through both reinforcement and a secondary concrete pour.

This structural system has generated a new building that was found to be very rigid compared with a conventional structure. The proposed modular building exhibited a period of 1.27 seconds, about one-fifth that of a conventional building. The maximum lateral drift, 6.14 inches, occurs under seismic loading and is one-quarter of a conventional building's drift. The deflected shape and the pattern of interstorey drifts are consistent with those of a coupled shear wall building. In conclusion, the computer analysis indicates that this new structure exceeds current code requirements for both hurricane winds and high seismic loads, and concomitantly provides a shortened construction time with reduced funding.
Abstract:
Nanocrystalline and bulk samples of “Fe”-doped CuO were prepared by coprecipitation and ceramic methods. Structural and compositional analyses were performed using X-ray diffraction, SEM, and EDAX. Traces of secondary phases such as CuFe2O4, Fe3O4, and α-Fe2O3, having peaks very close to those of the host CuO, were identified from the Rietveld profile analysis and the SAED pattern of bulk and nanocrystalline Cu0.98Fe0.02O samples. Vibrating Sample Magnetometer (VSM) measurements show hysteresis at 300 K for all the samples. The ferrimagnetic Néel transition temperature was found to be around 465°C irrespective of the “Fe” content, which is close to the value for cubic CuFe2O4. High-pressure X-ray diffraction studies were performed on 2% “Fe”-doped bulk CuO using synchrotron radiation. From the absence of any strong new peaks at high pressure, it is evident that the secondary phases, if present, are below the level of detection. Cu2O, which is diamagnetic by nature, was also doped with 1% “Fe” and was found to show paramagnetic behavior, in contrast to the “Fe”-doped CuO. Hence the possibility of intrinsic magnetization of “Fe”-doped CuO, apart from the secondary phases, is discussed based on the magnetization and the charge state of “Fe” and the host into which it is substituted.
Abstract:
The purpose of this study was to determine the flooding potential of contaminated areas within the White Oak Creek watershed on the Oak Ridge Reservation in Tennessee. The watershed was analyzed with an integrated surface and subsurface numerical model based on the MIKE SHE/MIKE 11 software. The model was calibrated and validated using five decades of historical data. A series of simulations was conducted to determine the watershed response to 25-year, 100-year and 500-year precipitation forecasts, and flooding maps were generated for those events. Predicted flood events were compared to Log Pearson III flood-flow frequency values for validation. This investigation also provides an improved understanding of the water fluxes between the surface and subsurface subdomains as they affect flood frequencies. In sum, this study presents crucial information to further assess the environmental risks of potential mobilization of contaminants of concern during extreme precipitation events.
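The Log Pearson III flood-flow frequency values used for validation follow a standard recipe: fit the mean, standard deviation, and skew of the log10 annual peaks, then apply a skew-dependent frequency factor. A sketch under illustrative assumptions; the annual peak series below is invented, not White Oak Creek data.

```python
import math
import statistics
from statistics import NormalDist

def log_pearson3_flow(peaks, return_period):
    """Flood-flow quantile from an annual-peak series by the log-Pearson
    type III procedure: fit mean/std/skew of the log10 flows, then apply
    the frequency-factor approximation used in USGS Bulletin 17B."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    mean = statistics.fmean(logs)
    std = statistics.stdev(logs)
    # Sample skew with the standard small-sample correction factor.
    skew = (n / ((n - 1) * (n - 2))) * sum(((x - mean) / std) ** 3 for x in logs)
    z = NormalDist().inv_cdf(1.0 - 1.0 / return_period)
    k = skew / 6.0
    K = (z + (z**2 - 1) * k + (z**3 - 6 * z) * k**2 / 3
         - (z**2 - 1) * k**3 + z * k**4 + k**5 / 3)
    return 10 ** (mean + K * std)

# Illustrative annual peak flows (cfs); not actual gauge data.
peaks = [820, 1100, 950, 1400, 760, 1250, 990, 1600, 880, 1150,
         1020, 1350, 900, 1750, 840, 1300, 970, 1500, 1080, 1200]
q100 = log_pearson3_flow(peaks, 100)
```

Evaluating the function at return periods of 25, 100 and 500 years gives the frequency values against which the simulated flood events can be compared.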