Abstract:
This work focuses on the role of macroseismology in the assessment of seismicity and probabilistic seismic hazard in Northern Europe. The main type of data under consideration is a set of macroseismic observations available for a given earthquake. The macroseismic questionnaires used to collect earthquake observations from local residents since the late 1800s constitute a special part of the seismological heritage in the region. Information on the earthquakes felt on the coasts of the Gulf of Bothnia between 31 March and 2 April 1883 and on 28 July 1888 was retrieved from contemporary Finnish and Swedish newspapers, while the earthquake of 4 November 1898 GMT is an example of an early systematic macroseismic survey in the region. A data set of more than 1200 macroseismic questionnaires is available for the earthquake in Central Finland on 16 November 1931. Basic macroseismic investigations, including the preparation of new intensity data point (IDP) maps, were conducted for these earthquakes. Previously disregarded usable observations were found in the press. The improved collection of IDPs of the 1888 earthquake shows that this event was a rare occurrence in the area: in contrast to earlier notions, it was felt on both sides of the Gulf of Bothnia. The data on the earthquake of 4 November 1898 GMT were augmented with historical background information discovered in various archives and libraries. This earthquake was of some concern to the authorities, because extra fire inspections were conducted in at least three towns, namely Tornio, Haparanda and Piteå, located in the centre of the area of perceptibility. This event posed the indirect hazard of fire, although its magnitude of around 4.6 was minor on the global scale. The distribution of slightly damaging intensities was larger than previously outlined. This may have resulted from the amplification of the ground shaking in the soft soil of the coast and river valleys, where most of the population was found.
The large data set of the 1931 earthquake provided an opportunity to apply statistical methods and to assess methodologies for dealing with macroseismic intensity. The data set was evaluated using correspondence analysis. Different approaches, such as gridding, were tested to estimate the macroseismic field from intensity values distributed irregularly in space. In general, the characteristics of intensity warrant careful consideration. A more pervasive perception of intensity as an ordinal quantity affected by uncertainties is advocated. A parametric earthquake catalogue comprising entries from both the macroseismic and instrumental eras was used for probabilistic seismic hazard assessment. The parametric-historic methodology was applied to estimate seismic hazard at a given site in Finland and to prepare a seismic hazard map for Northern Europe. The interpretation of these results is an important issue, because the recurrence times of damaging earthquakes may well exceed thousands of years in an intraplate setting such as Northern Europe. This application may therefore be seen as an example of short-term hazard assessment.
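The abstract's point that intensity is an ordinal quantity suggests aggregating irregularly spaced IDPs by cell medians rather than interpolating means. A minimal sketch of such a gridding step follows; the data, cell size, and function names are illustrative and not the thesis's actual procedure:

```python
from collections import defaultdict
from statistics import median_low

def grid_intensities(idps, cell_deg=0.5):
    """Aggregate irregularly spaced intensity data points (IDPs)
    onto a regular latitude/longitude grid. Because intensity is
    ordinal, each cell takes the lower median of its IDPs rather
    than an arithmetic mean."""
    cells = defaultdict(list)
    for lat, lon, intensity in idps:
        key = (round(lat / cell_deg), round(lon / cell_deg))
        cells[key].append(intensity)
    return {(i * cell_deg, j * cell_deg): median_low(vals)
            for (i, j), vals in cells.items()}

# Hypothetical IDPs: (latitude, longitude, macroseismic intensity)
idps = [(62.1, 25.3, 5), (62.2, 25.4, 4), (63.0, 26.1, 3)]
field = grid_intensities(idps)
```

Using `median_low` keeps each cell value on the original ordinal intensity scale instead of producing fractional intensities.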
Abstract:
Epitaxial bilayered thin films composed of ferromagnetic La0.6Sr0.4MnO3 and ferroelectric 0.7Pb(Mg1/3Nb2/3)O3-0.3PbTiO3 were fabricated on LaAlO3 (100) substrates by pulsed laser ablation. Ferroelectric, ferromagnetic and magneto-dielectric characterizations performed earlier indicated the possible existence of strain-mediated magneto-electric coupling in these biferroic heterostructures. In order to investigate their true remnant polarization characteristics, usable in devices, room-temperature polarization versus electric field, positive-up negative-down (PUND) pulse polarization studies and remnant hysteresis measurements were carried out. The PUND and remnant hysteresis measurements revealed the significant contribution of the non-remnant component in the observed polarization hysteresis response of these heterostructures. (C) 2010 Published by Elsevier Ltd
Abstract:
The purpose of this work is to use the concepts of human time and cultural trauma in a biographical study of the turning points in the recent history of Estonia. This research is primarily based on 148 in-depth biographical interviews conducted in Estonia and Sweden in 1995-2005, supplemented by excerpts from 5 collections and 10 individually published autobiographies. The main body of the thesis consists of six published and two forthcoming refereed articles, summarised in the theoretical introduction, and an Appendix containing the full texts of three particular life stories. The topic of the first article is the generational composition and the collective action frames of anti-Soviet social mobilisation in Estonia in 1940-1990. The second article details the differentiation of the rites of passage and the calendar traditions as a strategy to adapt to the rapidly changed political realities, comparing Soviet Estonia and the boat-refugees in Sweden. The third article investigates the life stories of the double-minded strategic generation of the Estonian-inclined Communists, who attempted to work within the Soviet system while professing to uphold the ideals of pre-war Estonia. The fourth article concentrates on the problems of double mental standards as a coping strategy in a contradictory social reality. The fifth article applies the theory of cultural trauma to the social practice of singing nationalism in Estonia. The sixth article bridges the ideas of Russian theoreticians concerning cultural dialogue and the Western paradigm of cultural trauma, with examples from Estonian Russian life stories. The seventh article takes a biographical look at the logic of the unraveling of cultural trauma through four Soviet decades. The eighth article explores the re-shaping of citizen activities as a strategy of coping with the loss of the independent nation state, comparing Soviet Estonia and the Swedish Estonians.
Cultural trauma is interpreted as the re-ordering of the society's value-normative constellation due to sharp, violent, usually political events. The first trauma under consideration was caused by the occupations of the Republic of Estonia by the Soviet army in 1940-45. After half a century of suppression, the memories of these events resurfaced as different stories describing the long-term, often inter-generational strategies of coping with the value collapse. The second cultural trauma is revealed together with the collapse of the Soviet power and ideology in Estonia in 1991. According to the empirical data, three trauma discourses have been reconstructed: the forced adaptation of the homeland Estonians to the Soviet order; the difficulty of preserving Estonian identity in exile (Sweden); and the identity crisis of the Russian population of Estonia. Comparative analyses of these discourses have shown that opposing experiences and worldviews cause conflicting interpretations of the past. Different social and ethnic groups consider coping with cultural trauma a matter of self-defence and create appropriate usable pasts to identify with. Keywords: human time, cultural trauma, frame analysis, discourse, life stories
Abstract:
The aim of this report is to discuss the role of the relationship type and communication in two Finnish food chains, namely the pig meat-to-sausage (pig meat chain) and the cereal-to-rye bread (rye chain) chains. Furthermore, the objective is to examine the factors influencing the choice of a relationship type and the sustainability of a business relationship. Altogether 1808 questionnaires were sent to producers, processors and retailers operating in these two chains, of which 224 usable questionnaires were returned (a response rate of 12.4%). The great majority of the respondents (98.7%) were small businesses employing fewer than 50 people. Almost 70 per cent of the respondents were farmers. In both chains, formal contracts were stated to be the most important relationship type used with business partners. Although for many businesses written contracts are a common business practice, the essential role of the contracts was the security they provide regarding demand/supply and quality issues. Regarding the choice of relationship type, the main difference between the two chains emerged in the prevalence of spot markets and financial participation arrangements. The usage of spot markets was significantly more common in the rye chain than in the pig meat chain, while financial participation arrangements were much more common among the businesses in the pig meat chain than in the rye chain. Furthermore, the analysis showed that most of the businesses in the pig meat chain claimed not to be free to choose the relationship type they use. In particular, membership in a co-operative and the practices of a business partner were mentioned as reasons limiting this freedom of choice. The main business relations in both chains were described as having a long-term orientation and being based on formal written contracts.
Typically, the main business relationships did not depend on particular key persons only; the relationship would remain even if the key people left the business. The quality of these relationships was satisfactory in both chains and across all the stakeholder groups, though the downstream processors and the retailers had a slightly more positive view of their main business partners than the farmers and the upstream processors. The businesses operating in the pig meat chain also seemed to be more dependent on their main business relations than the businesses in the rye chain. Although the communication means were rather similar in both chains (the phone being the most important), there was some variation between the chains concerning the communication frequency necessary to maintain the relationship with the main business partner. In short, the businesses in the pig meat chain seemed to appreciate more frequent communication with their main business partners than the businesses in the rye chain. Personal meetings with the main business partners were quite rare in both chains. All the respondent groups were, however, fairly satisfied with the communication frequency and information quality between them and the main business partner. The business cultures could be argued to be rather hegemonic among the businesses in both the pig meat and rye chains. Avoidance of uncertainty, appreciation of long-term orientation and independence were considered important factors in the business cultures. Furthermore, trust, commitment and satisfaction in business partners were thought to be essential elements of business operations in all the respondent groups. In order to investigate which factors have an effect on the choice of a relationship type, several hypotheses were tested by using binary and multinomial logit analyses.
According to these analyses, avoidance of uncertainty and risk has a certain effect on the relationship type chosen, i.e. the willingness to avoid uncertainty increases the probability of choosing stable relationships, such as repeated market transactions and formal written contracts, but not necessarily those that require high financial commitment (such as financial participation arrangements). The probability of engaging in financial participation arrangements seemed to increase with long-term orientation. The hypotheses concerning the sustainability of the economic relations were tested by using a structural equation model (SEM). In the model, five variables were found to have a positive and statistically significant impact on the sustainable economic relationship construct. Ordered by importance, those factors are: (i) communication quality, (ii) personal bonds, (iii) equal power distribution, (iv) local embeddedness and (v) competition.
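The binary logit mechanism referred to above can be sketched as follows. The variable names and coefficient values are invented for illustration and are not the report's estimates:

```python
import math

def formal_contract_prob(uncertainty_avoidance, long_term_orientation,
                         b0=-1.2, b1=0.8, b2=0.3):
    """Binary logit: probability that a business chooses a stable,
    formally contracted relationship over a spot-market one.
    Coefficients b0..b2 are purely illustrative."""
    z = b0 + b1 * uncertainty_avoidance + b2 * long_term_orientation
    return 1.0 / (1.0 + math.exp(-z))

# A higher uncertainty-avoidance score raises the predicted
# probability of choosing a stable relationship type.
p_low = formal_contract_prob(1, 3)
p_high = formal_contract_prob(5, 3)
```

With a positive coefficient on uncertainty avoidance, the model reproduces the abstract's finding in direction only: more uncertainty-averse firms are predicted to prefer stable relationship types.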
Abstract:
Introduction. We estimate the total yearly volume of peer-reviewed scientific journal articles published world-wide, as well as the share of these articles available openly on the Web, either directly or as copies in e-print repositories. Method. We rely on data from two commercial databases (ISI and Ulrich's Periodicals Directory) supplemented by sampling and Google searches. Analysis. A central issue is the finding that ISI-indexed journals publish far more articles per year (111) than non-ISI-indexed journals (26), which means that the total figure we obtain is much lower than many earlier estimates. Our method of analysing the number of repository copies (green open access) differs from several earlier studies, which counted the copies in identified repositories: we start from a random sample of articles and then test whether copies can be found by a Web search engine. Results. We estimate that in 2006 the total number of articles published was approximately 1,350,000. Of this number, 4.6% became immediately openly available and an additional 3.5% after an embargo period of, typically, one year. Furthermore, usable copies of 11.3% could be found in subject-specific or institutional repositories or on the home pages of the authors. Conclusions. We believe our results are the most reliable so far published and, therefore, should be useful in the on-going debate about Open Access among both academics and science policy makers. The method is replicable and also lends itself to longitudinal studies in the future.
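The headline figures quoted in this abstract combine with simple arithmetic; the sketch below reproduces them using only the numbers stated above:

```python
# Combining only the figures quoted in the abstract for 2006.
total_articles = 1_350_000

gold_immediate = 0.046 * total_articles  # openly available at once
gold_delayed = 0.035 * total_articles    # open after ~1 year embargo
green_copies = 0.113 * total_articles    # copies found on the open Web

open_share = 0.046 + 0.035 + 0.113       # total openly available share
```

So roughly 19.4% of the estimated 2006 article output was openly available in some form.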
Abstract:
Service researchers have repeatedly claimed that firms should acquire customer information in order to develop services that fit customer needs. Despite this, studies concentrating on the actual use of customer information in service development are lacking. The present study fills this research gap by investigating information use during a service development process. It demonstrates that use is not a straightforward task that automatically follows the acquisition of customer information. In fact, out of the six identified types of use, four represent non-usage of customer information. Hence, the study demonstrates that the acquisition of customer information does not guarantee that the information will actually be used in development. The study used an ethnographic approach and was consequently conducted in the field, in real time, over an extensive period of 13 months. Participant observation allowed direct access to the investigated phenomenon, i.e. the different types of use by the observed development project members were captured as they emerged. In addition, interviews, informal discussions and internal documents were used to gather data. The development process of a bank's website constituted the empirical context of the investigation. This ethnography brings novel insights to both academia and practice. It critically questions the traditional focus on the firm's acquisition of customer information and suggests that this focus ought to be expanded to the actual use of customer information. What is the point in acquiring costly customer information if it is not used in development? Based on the findings of this study, a holistic view on customer information, "information in use", is generated. This view extends the traditional view of customer information in three ways: the source, timing and form of data collection.
First, the study showed that customer information can come explicitly from the customer, from speculation among the developers, or it can already exist implicitly. Prior research has mainly focused on the customer as the information provider and the explicit source to turn to for information. Second, the study identified that the used and non-used customer information was acquired previously, currently within the time frame of the focal development process, and potentially in the future. Prior research has primarily focused on currently acquired customer information, i.e. information acquired within the time frame of the development process. Third, the used and non-used customer information was both formally and informally acquired. In prior research, a large number of sophisticated formal methods have been suggested for the acquisition of customer information to be used in development. By focusing on "information in use", new knowledge on the types of customer information that are actually used was generated. For example, the findings show that the formal customer information acquired during the development process is used less than customer information already existent within the firm. With this knowledge at hand, better methods to capture this more usable customer information can be developed. Moreover, the thesis suggests that by focusing more strongly on the use of customer information, service development processes can be restructured to facilitate the information that is actually used.
Abstract:
This study examined the effects of the Greeks of options on the trading results of delta hedging strategies under three different time units, or option-pricing models. These time units were calendar time, trading time and continuous time using discrete approximation (CTDA) time. The CTDA time model is a pricing model that, among other things, accounts for intraday and weekend patterns in volatility. For the CTDA time model, some additional theta measures, which were believed to be usable in trading, were developed. The study appears to verify that there were differences in the Greeks under different time units. It also revealed that these differences influence the delta hedging of options or portfolios. Although it is difficult to say which of the time models is the most usable, as this depends largely on the trader's view of the passing of time, market conditions and the portfolio in question, the CTDA time model can be viewed as an attractive alternative.
Abstract:
Pricing American put options on dividend-paying stocks has largely been ignored in the option pricing literature, because the problem is mathematically complex and valuation usually resorts to computationally expensive and impractical pricing applications. This paper conducted a simulation study, using two different approximation methods for the valuation of American put options on a stock with known discrete dividend payments, to find out whether there were pricing errors and which method could be the most usable for practitioners. The option pricing models used in the study were the dividend approximation by Blomeyer (1986) and the one by Barone-Adesi and Whaley (1988). The study showed that the approximation method by Blomeyer worked satisfactorily in most situations, but some errors occurred for longer times to the dividend payment, for smaller dividends and for in-the-money options. The approximation method by Barone-Adesi and Whaley worked well for in-the-money and at-the-money options, but had serious pricing errors for out-of-the-money options. The conclusion of the study is that a combination of both methods might be preferable to any single model.
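Neither approximation is reproduced here, but the escrowed-dividend idea underlying such methods can be sketched: subtract the present value of the known dividend from the spot price and apply the Black-Scholes put formula. This ignores early exercise, the very feature the Blomeyer and Barone-Adesi and Whaley methods address; parameter values below are illustrative:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S, K, r, sigma, T):
    """Black-Scholes European put price."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

def escrowed_dividend_put(S, K, r, sigma, T, div, t_div):
    """Escrow the known dividend: subtract its present value from
    the spot, then price a European put. Early exercise is ignored,
    so this underprices the American contract."""
    S_adj = S - div * exp(-r * t_div)
    return bs_put(S_adj, K, r, sigma, T)

price = escrowed_dividend_put(S=100, K=100, r=0.05, sigma=0.2,
                              T=0.5, div=2.0, t_div=0.25)
```

Lowering the effective spot by the escrowed dividend makes the put more valuable than the no-dividend case, as expected.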
Abstract:
This paper describes the use of simulation in the planning and operation of a small fleet of aircraft typical of the air force of a developing country. We consider a single flying base, where the operationally ready aircraft are stationed, and a repair depot, where the planes are overhauled. The measure of effectiveness used is "system availability", the percentage of airplanes that are usable. The system is modeled in GPSS as a cyclic queue process. The simulation model is used to perform sensitivity analyses and to validate the principal assumptions of the analytical model on which the simulation model is based.
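A minimal sketch of such a cyclic availability simulation is shown below. The paper's actual model was written in GPSS; here repair capacity is assumed unlimited and all times exponential, which are simplifying assumptions of this sketch only:

```python
import random

def system_availability(n_aircraft=10, mtbf=100.0, mttr=20.0,
                        horizon=50_000.0, seed=1):
    """Toy alternating-renewal simulation of system availability:
    each aircraft cycles between being operationally ready (mean
    up-time mtbf) and depot overhaul (mean repair time mttr).
    Repair capacity is assumed unlimited, unlike a real depot."""
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(n_aircraft):
        t = up = 0.0
        while t < horizon:
            u = rng.expovariate(1.0 / mtbf)       # time to next failure
            up += min(u, horizon - t)             # credit only in-horizon time
            t += u + rng.expovariate(1.0 / mttr)  # add overhaul time
        up_total += up
    return up_total / (n_aircraft * horizon)

avail = system_availability()
# For these means, availability should be near mtbf/(mtbf+mttr) ~ 0.83
```

Comparing the simulated value with the analytic ratio mtbf/(mtbf + mttr) mirrors the paper's use of simulation to validate an analytical model.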
Abstract:
This dissertation deals with the design, fabrication, and applications of microscale electrospray ionization chips for mass spectrometry. The microchip consists of a microchannel that leads to a sharp electrospray tip. The microchannel contains micropillars that produce a powerful capillary action in the channel. The capillary action delivers the liquid sample to the electrospray tip, which converts the liquid sample into gas-phase ions that can be analyzed with mass spectrometry. The microchip uses a high voltage, which can be utilized as a valve between the microchip and the mass spectrometer. The microchips can be used in various applications, such as analyses of drugs, proteins, peptides, or metabolites. The microchip works without pumps for liquid transfer, is usable for rapid analyses, and is sensitive. The performance characteristics of the single microchips are studied, and a rotating multitip version of the microchip is designed and fabricated. It is also possible to use the microchip as a microreactor, whose reaction products can be detected online with mass spectrometry. This property can be utilized, for example, for protein identification: proteins can be digested enzymatically on-chip and the reaction products, in this case peptides, detected with mass spectrometry. Because reactions occur faster at the microscale due to shorter diffusion lengths, the amount of protein can be very low, which is a benefit of the method. The microchip is well suited to surface-activated reactions because of the high surface-to-volume ratio of the dense micropillar array. For example, a titanium dioxide nanolayer on the micropillar array combined with UV radiation produces photocatalytic reactions, which can be used for mimicking drug metabolism biotransformation reactions. Rapid mimicking with the microchip eases the detection of possibly toxic compounds in preclinical research and could therefore speed up the development of new drugs.
A micropillar array chip can also be utilized in the fabrication of liquid chromatographic columns. Precisely ordered micropillar arrays offer a very homogeneous column, in which separation of compounds has been demonstrated using both laser-induced fluorescence and mass spectrometry. Because of its small dimensions, the integrated microchip-based liquid chromatography electrospray microchip is especially well suited to low sample concentrations. Overall, this work demonstrates that the designed and fabricated silicon/glass three-dimensionally sharp electrospray tip is unique and facilitates a stable ion spray for mass spectrometry.
Abstract:
Microcatchment water harvesting (MCWH) improved the survival and growth of planted trees on heavy soils in eastern Kenya five to six years after planting. In the best method, the cross-tied furrow microcatchments, the mean annual increments (MAI; based on the average biomass of living trees multiplied by tree density and survival) of the total and usable biomass in Prosopis juliflora were 2787 and 1610 kg ha-1 a-1 respectively, when the initial tree density was 500 to 1667 trees per hectare. Based on survival, the indigenous Acacia horrida, A. mellifera and A. zanzibarica were the most suitable species for planting using MCWH. When both survival and yield were considered, a local seed source of the introduced P. juliflora was superior to all other species. The MAI in MCWH was at best distinctly higher than that in the natural vegetation (163-307 and 66-111 kg ha-1 a-1 for total and usable biomass respectively); this cannot satisfy the fuelwood demand of concentrated populations, such as towns or irrigation schemes. The density of seeds of woody species in the topsoil was 40.1 seeds m-2 in the Acacia-Commiphora bushland and 12.6 seeds m-2 in the zone between the bushland and the Tana riverine forest. Rehabilitation of woody vegetation using the soil seed bank alone proved difficult due to the lack of seeds of desirable species. The regeneration and dynamics of woody vegetation were also studied both in cleared and undisturbed bushland. A sub-type of Acacia-Commiphora bushland was identified as Acacia reficiens bushland, in which the dominant Commiphora species is C. campestris. Most of the woody species did not have even-aged populations but cohort structures that were skewed towards young individuals. The woody vegetation and the status of soil nutrients were estimated to recover in 15-20 years on Vertic Natrargid soils after total removal of above-ground vegetation.
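The MAI definition given above can be written out explicitly. The division by stand age and the example numbers below are this sketch's assumptions, not values from the study:

```python
def mean_annual_increment(avg_tree_biomass_kg, trees_per_ha,
                          survival, stand_age_years):
    """MAI in kg ha-1 a-1: average biomass of living trees times
    tree density times survival, divided by stand age. The division
    by age is this sketch's reading of 'mean annual'."""
    return avg_tree_biomass_kg * trees_per_ha * survival / stand_age_years

# Invented example stand: 20 kg mean tree biomass, 1000 trees/ha,
# 80% survival, measured 6 years after planting.
mai = mean_annual_increment(20.0, 1000, 0.8, 6)
```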
Abstract:
The prevalent virtualization technologies provide QoS support within the software layers of the virtual machine monitor (VMM) or the operating system of the virtual machine (VM). The QoS features are mostly provided as extensions to the existing software used for accessing the I/O device, because of which the applications sharing the I/O device experience performance loss due to crosstalk effects or reduced usable bandwidth. In this paper we examine the NIC sharing effects across VMs on a Xen virtualized server and present an alternative paradigm that improves the shared bandwidth and reduces the crosstalk effect on the VMs. We implement the proposed hardware-software changes in a layered queuing network (LQN) model and use simulation techniques to evaluate the architecture. We find that simple changes in the device architecture and associated system software lead to application throughput improvements of up to 60%. The architecture also enables finer QoS controls at the device level and increases the scalability of device sharing across multiple virtual machines. We find that the performance improvement derived using the LQN model is comparable to that reported by similar real implementations.
Abstract:
Piezoelectric-device-based vibration energy harvesting requires a rectifier for conversion of the input ac to a usable dc form. Power loss due to the diode drop in the rectifier is a significant fraction of the already low levels of harvested power. The proposed circuit is a low-drop diode equivalent, which mimics a diode using a linear-region-operated MOSFET. The proposed diode equivalent is powered directly from the input signal and requires no additional power supply for its control. Power used by the control circuit is kept at a bare minimum to achieve an overall output power improvement. The diode equivalent was used to replace the four diodes in a full-wave bridge rectifier, which is the basic full-wave rectifier and a building block of more advanced rectifiers such as switch-only and bias-flip rectifiers. Simulation in 130-nm technology and experiments with discrete components show that a bridge rectifier with the proposed diode provides a 30-169% increase in the output power extracted from a piezoelectric device, as compared to a bridge rectifier with diode-connected MOSFETs. The bridge rectifier with the proposed diode can extract 90% of the maximum power available from an ideal piezoelectric device-bridge rectifier circuit. Setting aside the constraint of power loss, simulations indicate that a diode drop as low as 10 mV at 38 µA can be achieved.
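A first-order way to see why the diode drop matters so much: in a full-wave bridge, the load sees the source amplitude minus two diode drops. The sketch below compares a roughly 0.6 V diode-connected-MOSFET drop with the roughly 10 mV proposed diode; all component values are invented, and modeling the piezoelectric source as an ideal sinusoidal voltage is a crude assumption of this sketch:

```python
def bridge_output_power(v_peak, v_diode, r_load):
    """First-order average power a full-wave bridge delivers to a
    resistive load from a sinusoidal source: the load amplitude is
    reduced by two diode drops. Source impedance and the piezo's
    capacitance are ignored, so this is only a rough model."""
    v_eff = max(v_peak - 2.0 * v_diode, 0.0)
    return v_eff ** 2 / (2.0 * r_load)  # mean square of a sine / R

p_mosfet = bridge_output_power(5.0, 0.6, 100e3)     # ~0.6 V diode drop
p_proposed = bridge_output_power(5.0, 0.01, 100e3)  # ~10 mV drop
gain = p_proposed / p_mosfet - 1.0                  # fractional gain
```

Even this crude model shows an output power gain of tens of percent, with the benefit growing rapidly as the source amplitude shrinks toward the diode drops.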
Abstract:
Ag-Ni films were electrodeposited over a Cu substrate. Structural characterization revealed a fibrous microstructure with an amorphous structure in the as-deposited film. Isothermal annealing of the film at 400 °C inside a transmission electron microscope led to an amorphous-to-crystalline transition along with the evolution of nano-sized particles in the microstructure. The crystalline phase was a Ni-Ag solid solution. The relative volume fraction of the nano-sized particles increased gradually with time. There was, however, no detectable decomposition of the solid-solution phase until about 4 h of annealing. Beyond 4 h, phase separation initiated and pure Ag and Ni phases formed in the film. This study provides a methodology by which microstructural engineering of as-electrodeposited amorphous Ag-Ni films can be conducted to isolate a particular microstructure in order to tap specific, potentially usable functionalities. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
A micro-newton static force sensor is presented here as a packaged product. The sensor, which is based on the mechanics of deformable objects, consists of a compliant mechanism that amplifies the displacement caused by the force to be measured. The output displacement, captured using a digital microscope and analyzed using image processing techniques, is used to calculate the force via a precalibrated force-displacement curve. Images are scanned in real time at a frequency of 15 frames per second and sampled at around half the scanning frequency. The sensor was built, packaged, calibrated, and tested. It has simulated and measured stiffness values of 2.60 N/m and 2.57 N/m, respectively. The smallest force it can reliably measure in the presence of noise is about 2 µN over a range of 1.4 mN. The off-the-shelf digital microscope aside, all of its other components are purely mechanical; they are inexpensive and can be easily made using simple machines. Another highlight of the sensor is that its movable and delicate components are easily replaceable. The sensor can be used in aqueous environments because it does not use electric, magnetic, thermal, or any other fields. Currently, it can only measure static forces, or forces that vary at less than 1 Hz, because its response time and bandwidth are limited by the speed of imaging with a camera. With the universal serial bus (USB) connection of its digital microscope, a custom-developed graphical user interface (GUI), and related software, the sensor is fully developed as a readily usable product.
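The force readout described above reduces to mapping the image-measured displacement through the calibration. A sketch using the measured stiffness of 2.57 N/m; reducing the precalibrated curve to a single linear stiffness is a simplification of this sketch:

```python
STIFFNESS_N_PER_M = 2.57  # measured stiffness quoted in the abstract

def force_from_displacement(displacement_m, k=STIFFNESS_N_PER_M):
    """Convert the image-measured displacement into force with a
    linear F = k * x relation; the real device uses a full
    precalibrated force-displacement curve."""
    return k * displacement_m

# Resolution check: the ~2 µN noise floor corresponds to the
# smallest displacement the imaging system must resolve.
x_min = 2e-6 / STIFFNESS_N_PER_M  # about 0.78 µm
```

The low stiffness is what makes an optical readout feasible: micro-newton forces translate into displacements of a fraction of a micrometre, within reach of a digital microscope with image processing.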