943 results for Developers of Java system


Relevance: 100.00%

Publisher:

Abstract:

The aim of this study was to investigate the effects of biosurfactants and organic matter amendments on the bioremediation of diesel contaminated soil. Two strains of Pseudomonas aeruginosa with the ability to produce biosurfactant were isolated from a water and soil sample in Co. Sligo. The first strain, Isolate A, produced a biosurfactant which contained four rhamnose-containing compounds when grown in proteose peptone glucose ammonium salts medium with glucose as the carbon source. Two of the components were identified as rhamnolipids 1 and 2, whilst the other two components were unidentified. The second strain, Isolate GO, when grown in similar conditions produced a biosurfactant which contained only rhamnolipid 2. The type of aeration system used had a significant effect on the abiotic removal of diesel from soil. Forced aeration at a rate of 120 L O2/kg soil/hour resulted in the greatest removal. Over a 112 day incubation period this type of aeration resulted in the removal of 48% of total hexane extractable material. In relation to bioremediation of the diesel contaminated sandy soil, amending the soil with two inorganic nutrients, KH2PO4 and NH4NO3, significantly enhanced the removal of diesel, especially the n-alkanes, when compared to an unamended control. The biosurfactant from Isolate A and a biosurfactant produced by Pseudomonas aeruginosa NCIMB 8628 (a known biosurfactant producer), when applied at a concentration of three times their critical micelle concentration, had a neutral effect on the biodegradation of diesel contaminated sandy soil, even in the presence of inorganic nutrients. It was deduced that the main reason for this neutral effect was that they were both readily biodegraded by the indigenous microorganisms. The most significant removal of diesel occurred when the soils were amended with two organic materials plus the inorganic nutrients. Amendment of the diesel contaminated soil with spent brewery grain (SBG) removed significantly more diesel than amendment with dried molassed sugar beet pulp (DMSBP). After a 108 day incubation period, amendment of the diesel contaminated soil with DMSBP plus inorganic nutrients and SBG plus inorganic nutrients resulted in 72 and 89% removal of diesel range organics (DRO), in comparison to 41% removal of DRO in an inorganic nutrient amended control. The first-order kinetic model described the degradation of the different diesel components with high correlation and was used to calculate half-lives. The half-life of the total n-alkanes in the diesel was reduced from 40 days in the control to 8.5 and 5.1 days in the presence of DMSBP and SBG, respectively. The half-life of the unresolved complex mixture (UCM) in the diesel contaminated soil was also significantly reduced in the presence of the two organics: DMSBP and SBG addition reduced the UCM half-life to 86 and 43 days, respectively, compared to 153 days in the control. The diesel component whose removal was enhanced the most by the organic material amendments was the isoprenoid pristane, a compound which until recently was thought to be nonbiodegradable and was used as an inert biomarker in oil degradation studies. The half-life of pristane was reduced from 533 days in the nutrient amended control to 49.5 and 19.5 days in DMSBP and SBG amended soils.
These results indicate that the addition of DMSBP and SBG to diesel contaminated soil stimulated diesel biodegradation, probably by stimulating the indigenous diesel-degrading microbial population to degrade the diesel hydrocarbons, whilst the addition of biosurfactants had no enhancing effect on the bioremediation process.
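
As a reminder of the relationship behind the half-life figures quoted above (the fitted rate constants themselves are not reported in the abstract), the first-order kinetic model is

    C(t) = C_0 \, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k}

so, for example, reducing the n-alkane half-life from 40 days to 5.1 days corresponds to the fitted rate constant increasing from roughly 0.017 per day to about 0.14 per day.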

Relevance: 100.00%

Publisher:

Abstract:

The objective of this thesis is to compare and contrast environmental licensing systems, for the wood panel industry, in a number of countries in order to determine which system is the best from an environmental and economic point of view. The thesis also examines the impact which government can have on industry and the type of licensing system in operation in a country. Initially, the thesis investigates the origins of the various environmental licensing systems which are in operation in Ireland, Scotland, Wales, France, USA and Canada. It then examines the Environmental Agencies which control and supervise industry in these countries. The impact which the type of government (i.e. unitary or federal) in charge in any particular country has on industry and the Regulatory Agency in that country is then described. Most of the mills in the thesis make a product called OSB (Oriented Strand Board) and the manufacturing process is briefly described in order to understand where the various emissions are generated. The main body of the thesis examines a number of environmental parameters which have emission limit values in the licenses examined, although not all of these parameters have emission limit values in all of the licenses. All of these parameters are used as indicators of the potential impact which the mill can have on the environment. They have been set at specific levels by the Environmental Agencies in the individual countries to control the impact of the mill. Following on from this, the two main types of air pollution control equipment (WESPs and RTOs) are described in regard to their function and capabilities. The mill licenses are then presented in the form of results tables which compare air results and water results separately. This is due to the fact that the most significant emission from this type of industry is to air. A matrix system is used to compare the licenses so that the comparison can be as objective as possible. The discussion examines all of the elements previously described and from this it was concluded that the IPC licensing system is the best from an environmental and economic point of view. It is a much more expensive system to operate than the other systems examined, but it is much more comprehensive and looks at the mill as a whole rather than fragmenting it. It was also seen that the type of environmental licensing system which is in place in a country can play a role in the locating of an industry as certain systems were seen to have more stringent standards attached to them. The type of standard in place in a country is in turn influenced by the type of government which is in place in that country.
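
The "matrix system" used to compare the licences is not spelled out in the abstract; purely as an illustrative sketch of how such a weighted comparison matrix can be scored (the criteria, weights, scores and system names below are hypothetical, not taken from the thesis):

    # Hypothetical weighted comparison matrix for environmental licensing systems.
    weights = {"air emission limits": 0.4, "water emission limits": 0.3,
               "monitoring requirements": 0.2, "reporting requirements": 0.1}

    # Score of each licensing system against each criterion (0-10, illustrative).
    scores = {
        "IPC licence": {"air emission limits": 9, "water emission limits": 8,
                        "monitoring requirements": 9, "reporting requirements": 8},
        "Permit system X": {"air emission limits": 7, "water emission limits": 6,
                            "monitoring requirements": 5, "reporting requirements": 7},
    }

    def weighted_total(system):
        """Weighted sum of criterion scores for one licensing system."""
        return sum(weights[c] * s for c, s in scores[system].items())

    for system in scores:
        print(f"{system}: {weighted_total(system):.2f}")

A matrix of this kind makes the basis of the comparison explicit, which is what allows the thesis to claim a degree of objectivity.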

Relevance: 100.00%

Publisher:

Abstract:

The purpose of this study was to evaluate the determinism of the AS-Interface network and the three main families of control systems that may use it, namely PLC, PC and RTOS. During the course of this study the PROFIBUS and Ethernet field level networks were also considered in order to ensure that they would not introduce unacceptable latencies into the overall control system. This research demonstrated that an incorrectly configured Ethernet network introduces unacceptable latencies of variable duration into the control system, thus care must be exercised if the determinism of a control system is not to be compromised. This study introduces a new concept of using statistics and process capability metrics, in the form of Cpk values, to specify how suitable a control system is for a given control task. The PLC systems which were tested demonstrated extremely deterministic responses, but when a large number of iterations was introduced in the user program, the mean control system latency was much too great for an AS-I network. Thus the PLC was found to be unsuitable for an AS-I network if a large, complex user program is required. The PC systems which were tested were non-deterministic and had latencies of variable duration. These latencies became extremely exaggerated when a graphing ActiveX control was included in the control application. These PC systems also exhibited a non-normal frequency distribution of control system latencies, and as such are unsuitable for implementation with an AS-I network. The RTOS system which was tested overcame the problems identified with the PLC systems and produced an extremely deterministic response, even when a large number of iterations was introduced in the user program. The RTOS system which was tested is capable of providing a suitable deterministic control system response, even when an extremely large, complex user program is required.
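
The abstract does not give the exact Cpk formulation used; the sketch below shows the conventional one-sided process capability index applied to a set of control-system latency samples, where the upper specification limit (here 5 ms) and the sample values are assumed for illustration only:

    import statistics

    def cpk_upper(latencies_ms, usl_ms):
        """One-sided Cpk against an upper specification limit:
        Cpk = (USL - mean) / (3 * standard deviation)."""
        mean = statistics.mean(latencies_ms)
        sd = statistics.stdev(latencies_ms)
        return (usl_ms - mean) / (3 * sd)

    # Hypothetical latency samples (ms) from a control system under test.
    samples = [1.02, 0.98, 1.05, 1.01, 0.99, 1.03, 1.00, 1.04]
    print(f"Cpk = {cpk_upper(samples, usl_ms=5.0):.2f}")

A higher Cpk indicates that the latency distribution sits well inside the specification, which is the sense in which the study uses it to rate how suitable a control system is for a given task.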

Relevance: 100.00%

Publisher:

Abstract:

In the literature on risk, one generally assumes that uncertainty is uniformly distributed over the entire working horizon, when the absolute risk-aversion index is negative and constant. From this perspective, the risk is totally exogenous, and thus independent of endogenous risks. The classic procedure is "myopic" with regard to potential changes in the future behavior of the agent due to inherent random fluctuations of the system: the agent's attitude to risk is rigid. Although often criticized, the most widely used hypothesis for the analysis of economic behavior is risk-neutrality. This borderline case must be treated with prudence in a dynamic stochastic context. The traditional measures of risk-aversion are generally too weak for making comparisons between risky situations, given the dynamic complexity of the environment. This can be highlighted in concrete problems in finance and insurance, contexts in which the Arrow-Pratt measures (in the small) give ambiguous results.
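
For reference, the Arrow-Pratt measure of absolute risk aversion referred to above is the standard one,

    A(w) = -\frac{u''(w)}{u'(w)}

which is constant, A(w) = a, for the CARA utility u(w) = -e^{-aw}. The abstract's point is that such "in the small" measures can rank risky alternatives ambiguously once the environment is dynamic and the risk is partly endogenous.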

Relevance: 100.00%

Publisher:

Abstract:

Synchronization of data coming from different sources is of high importance in biomechanics to ensure reliable analyses. This synchronization can either be performed through hardware to obtain perfect matching of data, or post-processed digitally. Hardware synchronization can be achieved using trigger cables connecting different devices in many situations; however, this is often impractical, and sometimes impossible, in outdoor situations. The aim of this paper is to describe a wireless system for outdoor use, allowing synchronization of different types of devices, which may themselves be embedded and moving. In this system, each synchronization device is composed of: (i) a GPS receiver (used as time reference), (ii) a radio transmitter, and (iii) a microcontroller. These components are used to provide synchronized trigger signals at the desired frequency to the connected measurement device. The synchronization devices communicate wirelessly, are very lightweight, battery-operated and thus very easy to set up. They are adaptable to every measurement device equipped with either a trigger input or a recording channel. The accuracy of the system was validated using an oscilloscope. The mean synchronization error was found to be 0.39 μs and pulses are generated with an accuracy of <2 μs. The system provides synchronization accuracy about two orders of magnitude better than commonly used post-processing methods, and does not suffer from any drift in trigger generation.
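
As a conceptual sketch only (not the authors' firmware), the scheduling idea behind the drift-free behaviour can be expressed as follows: every device derives its trigger instants from the shared GPS time base, so devices configured with the same start time and frequency generate nominally identical trigger sequences, and the residual error is set by the GPS and radio accuracy rather than by local clock drift.

    def trigger_times(gps_start_s, frequency_hz, n_pulses):
        """Trigger instants derived from a shared GPS time reference.
        Devices using the same start time and frequency produce the same
        nominal sequence, so no relative drift accumulates."""
        period = 1.0 / frequency_hz
        return [gps_start_s + k * period for k in range(n_pulses)]

    # Two independent devices, same GPS reference and settings:
    device_a = trigger_times(gps_start_s=1000.0, frequency_hz=100.0, n_pulses=5)
    device_b = trigger_times(gps_start_s=1000.0, frequency_hz=100.0, n_pulses=5)
    offsets = [abs(a - b) for a, b in zip(device_a, device_b)]
    print(offsets)  # zero nominal offset between devices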

Relevance: 100.00%

Publisher:

Abstract:

Eukaryotic cells generate energy in the form of ATP, through a network of mitochondrial complexes and electron carriers known as the oxidative phosphorylation system. In mammals, mitochondrial complex I (CI) is the largest component of this system, comprising 45 different subunits encoded by mitochondrial and nuclear DNA. Humans diagnosed with mutations in the gene NDUFS4, encoding a nuclear DNA-encoded subunit of CI (NADH dehydrogenase ubiquinone Fe-S protein 4), typically suffer from Leigh syndrome, a neurodegenerative disease with onset in infancy or early childhood. Mitochondria from NDUFS4 patients usually lack detectable NDUFS4 protein and show a CI stability/assembly defect. Here, we describe a recessive mouse phenotype caused by the insertion of a transposable element into Ndufs4, identified by a novel combined linkage and expression analysis. Designated Ndufs4(fky), the mutation leads to aberrant transcript splicing and absence of NDUFS4 protein in all tissues tested of homozygous mice. Physical and behavioral symptoms displayed by Ndufs4(fky/fky) mice include temporary fur loss, growth retardation, unsteady gait, and abnormal body posture when suspended by the tail. Analysis of CI in Ndufs4(fky/fky) mice using blue native PAGE revealed the presence of a faster migrating crippled complex. This crippled CI was shown to lack subunits of the "N assembly module", which contains the NADH binding site, but contained two assembly factors not present in intact CI. Metabolomic analysis of the blood by tandem mass spectrometry showed increased hydroxyacylcarnitine species, implying that the CI defect leads to an imbalanced NADH/NAD(+) ratio that inhibits mitochondrial fatty acid β-oxidation.

Relevance: 100.00%

Publisher:

Abstract:

The objective of this paper is to re-evaluate the attitude to effort of a risk-averse decision-maker in an evolving environment. In the classic analysis, the space of efforts is generally discretized; more realistically, this new approach employs a continuum of effort levels. The presence of multiple possible effort and performance levels provides a better basis for explaining real economic phenomena. The traditional approach (see Laffont, J. J. & Tirole, J., 1993; Salanie, B., 1997; Laffont, J. J. & Martimort, D., 2002, among others) does not take into account the potential effect of the system dynamics on the agent's attitude to effort over time. In the context of a Principal-agent relationship, not only the incentives offered by the Principal but also the evolution of the dynamic system can lead the private agent to allocate a high level of effort. The incentives can be ineffective when the environment does not incite the agent to invest a high effort. This explains why some effici

Relevance: 100.00%

Publisher:

Abstract:

Thermal systems interchanging heat and mass by conduction, convection and radiation (solar and thermal) occur in many engineering applications, such as energy storage by solar collectors, window glazing in buildings, refrigeration of plastic moulds and air handling units. Often these thermal systems are composed of various elements, for example a building with walls, windows, rooms, etc. It would be of particular interest to have a modular thermal system which is formed by connecting different modules for the elements, with the flexibility to use and change models for individual elements and to add or remove elements without changing the entire code. A numerical approach to handling the heat transfer and fluid flow in such systems helps to save full-scale experiment time and cost, and also aids optimisation of the parameters of the system. The subsequent sections present a short summary of the work done until now on the orientation of the thesis in the field of numerical methods for heat transfer and fluid flow applications, the work in progress and the future work.
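
As a sketch of the modular idea described above (the class names and the lumped-capacitance models are illustrative assumptions, not the thesis code), each element can be a module with its own model and state, and modules are connected or removed without touching the rest of the system:

    # Illustrative modular thermal system: each module holds a temperature and a
    # thermal capacitance; links carry conductive heat flow between modules.

    class ThermalModule:
        def __init__(self, name, temperature_c, capacitance_j_per_k):
            self.name = name
            self.T = temperature_c
            self.C = capacitance_j_per_k

    class ThermalSystem:
        def __init__(self):
            self.modules = []
            self.links = []          # (module_a, module_b, conductance W/K)

        def add(self, module):
            self.modules.append(module)
            return module

        def connect(self, a, b, conductance_w_per_k):
            self.links.append((a, b, conductance_w_per_k))

        def step(self, dt_s):
            """Explicit Euler update of all module temperatures."""
            heat = {m: 0.0 for m in self.modules}
            for a, b, g in self.links:
                q = g * (a.T - b.T)          # heat flow from a to b (W)
                heat[a] -= q * dt_s
                heat[b] += q * dt_s
            for m in self.modules:
                m.T += heat[m] / m.C

    system = ThermalSystem()
    room = system.add(ThermalModule("room", 20.0, 5.0e5))
    wall = system.add(ThermalModule("wall", 10.0, 2.0e6))
    system.connect(room, wall, conductance_w_per_k=50.0)
    for _ in range(3600):                    # one hour with 1 s steps
        system.step(dt_s=1.0)
    print(f"{room.name}: {room.T:.2f} C, {wall.name}: {wall.T:.2f} C")

Swapping in a more detailed model for one element (for example a window with a radiation term) only changes that module, which is the flexibility the paragraph argues for.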

Relevance: 100.00%

Publisher:

Abstract:

We study the properties of the well-known replicator dynamics when applied to a finitely repeated version of the Prisoners' Dilemma game. We characterize the behavior of such dynamics under strongly simplifying assumptions (i.e. only 3 strategies are available) and show that the basin of attraction of defection shrinks as the number of repetitions increases. After discussing the difficulties involved in trying to relax the 'strongly simplifying assumptions' above, we approach the same model by means of simulations based on genetic algorithms. The resulting simulations describe a behavior of the system very close to the one predicted by the replicator dynamics without imposing any of the assumptions of the analytical model. Our main conclusion is that analytical and computational models are good complements for research in social sciences. Indeed, while on the one hand computational models are extremely useful to extend the scope of the analysis to complex scenarios
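
As an illustrative sketch of the analytical part (the abstract does not name the three strategies or the payoffs, so the set {Always Defect, Always Cooperate, Tit-for-Tat} and the stage-game values below are assumptions for illustration), discrete-time replicator dynamics over an n-round Prisoners' Dilemma can be simulated as follows:

    # Replicator dynamics for a finitely repeated Prisoners' Dilemma.
    TEMPT, REWARD, PUNISH, SUCKER = 5.0, 3.0, 1.0, 0.0   # illustrative payoffs

    def repeated_payoffs(rounds):
        """Total payoff matrix A[i][j] for row strategy i against column j.
        Order: 0 = Always Defect, 1 = Always Cooperate, 2 = Tit-for-Tat."""
        return [
            [PUNISH * rounds, TEMPT * rounds, TEMPT + PUNISH * (rounds - 1)],
            [SUCKER * rounds, REWARD * rounds, REWARD * rounds],
            [SUCKER + PUNISH * (rounds - 1), REWARD * rounds, REWARD * rounds],
        ]

    def replicator(x, payoff, dt=0.001, steps=50000):
        """Euler integration of x_i' = x_i * (f_i - mean fitness)."""
        for _ in range(steps):
            f = [sum(payoff[i][j] * x[j] for j in range(3)) for i in range(3)]
            fbar = sum(x[i] * f[i] for i in range(3))
            x = [max(0.0, x[i] + dt * x[i] * (f[i] - fbar)) for i in range(3)]
            total = sum(x)
            x = [xi / total for xi in x]
        return x

    for rounds in (1, 5, 20):
        final = replicator([0.4, 0.3, 0.3], repeated_payoffs(rounds))
        print(rounds, [round(v, 3) for v in final])

Running the sweep over the number of repetitions shows how the share of the population that ends up defecting depends on the number of rounds, which is the qualitative effect the abstract describes.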

Relevance: 100.00%

Publisher:

Abstract:

Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold standard TDM approach but require computation assistance. In recent decades computer programs have been developed to assist clinicians in this assignment. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. Numbers of drugs handled by the software vary widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens, based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Computer-assisted TDM is gaining growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
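
The Bayesian "a posteriori" adjustment mentioned above is, in essence, maximum a posteriori estimation of individual pharmacokinetic parameters given population priors and measured concentrations. The sketch below is a generic illustration with a one-compartment intravenous bolus model and made-up population values and patient data; it is not taken from any of the reviewed programs.

    import math
    from scipy.optimize import minimize

    # Hypothetical population priors (log-normal) for a 1-compartment IV bolus model.
    CL_POP, V_POP = 5.0, 50.0        # clearance (L/h), volume (L)
    OMEGA_CL, OMEGA_V = 0.3, 0.2     # SD of the log-parameters
    SIGMA = 0.5                      # additive residual error SD (mg/L)

    def conc(dose_mg, cl, v, t_h):
        """Concentration of a 1-compartment IV bolus model at time t."""
        return (dose_mg / v) * math.exp(-(cl / v) * t_h)

    def neg_log_posterior(log_params, dose_mg, times_h, obs_mg_l):
        cl, v = math.exp(log_params[0]), math.exp(log_params[1])
        pred = [conc(dose_mg, cl, v, t) for t in times_h]
        loglik = -sum((o - p) ** 2 for o, p in zip(obs_mg_l, pred)) / (2 * SIGMA ** 2)
        logprior = (-(log_params[0] - math.log(CL_POP)) ** 2 / (2 * OMEGA_CL ** 2)
                    - (log_params[1] - math.log(V_POP)) ** 2 / (2 * OMEGA_V ** 2))
        return -(loglik + logprior)

    # Two measured concentrations after a 500 mg bolus (hypothetical patient data).
    times, observations = [2.0, 12.0], [8.2, 3.1]
    result = minimize(neg_log_posterior,
                      x0=[math.log(CL_POP), math.log(V_POP)],
                      args=(500.0, times, observations),
                      method="Nelder-Mead")
    cl_hat, v_hat = [math.exp(x) for x in result.x]
    print(f"Individual estimates: CL = {cl_hat:.2f} L/h, V = {v_hat:.2f} L")

The individual estimates can then be used to simulate candidate regimens and pick the dose that keeps the predicted concentrations within the target range, which is the adjustment step the surveyed programs automate.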

Relevance: 100.00%

Publisher:

Abstract:

The geodynamic forces acting in the Earth's interior manifest themselves in a variety of ways. Volcanoes are amongst the most impressive examples in this respect, but as with an iceberg, they only represent the tip of a more extensive system hidden underground. This system consists of a source region where melt forms and accumulates, feeder connections in which magma is transported towards the surface, and different reservoirs where it is stored before it eventually erupts to form a volcano. A magma represents a mixture of melt and crystals. The latter can be extracted from the source region, or form anywhere along the path towards their final crystallization place, and they retain information about the overall plumbing system. The host rocks of an intrusion, in contrast, provide information at the emplacement level: they record the effects of the thermal and mechanical forces imposed by the magma. For a better understanding of the system, both parts - magmatic and metamorphic petrology - have to be integrated. I will demonstrate in my thesis that information from both is complementary. It is an iterative process, using constraints from one field to better constrain the other. Reading the history of the host rocks is not always straightforward. This is shown in chapter two, where a model for the formation of clustered garnets observed in the contact aureole is proposed. Fragments of garnets older than the intrusive rocks are overgrown by garnet crystallizing due to the reheating during emplacement of the adjacent pluton. The formation of the clusters is therefore not a single event, as generally assumed, but the result of a two-stage process, namely the alteration of the old grains and the overgrowth and amalgamation of new garnet rims. This makes an important difference when applying petrological methods such as thermobarometry, geochronology or grain size distributions. The thermal conditions in the aureole are a strong function of the emplacement style of the pluton; therefore it is necessary to understand the pluton before drawing conclusions about its aureole. A study investigating the intrusive rocks by means of field, geochemical, geochronological and structural methods is presented in chapter three. This provided important information about the assembly of the intrusion, but also new insights into the nature of large, homogeneous plutons and the structure of the plumbing system in general. The incremental nature of the emplacement of the Western Adamello tonalite is documented, and the existence of an intermediate reservoir beneath homogeneous plutons is proposed. In chapter four it is demonstrated that information extracted from the host rock provides further constraints on the emplacement process of the intrusion. The temperatures obtained by combining field observations with phase petrology modeling are used together with thermal models to constrain the magmatic activity in the adjacent intrusion. Instead of using the thermal models to check the petrological result, the inverse is done: the model parameters were changed until a match with the aureole temperatures was obtained. It is shown that only a few combinations give a positive match and that temperature estimates from the aureole can constrain the frequency of ancient magmatic systems. In the fifth chapter, the anisotropy of magnetic susceptibility of intrusive rocks is compared to 3D tomography.
The obtained signal is a function of the shape and distribution of ferromagnetic grains, and is often used to infer flow directions of magma. It turns out that the signal is dominated by the shape of the magnetic crystals and, where they form tight clusters, also by their distribution. This is in good agreement with the predictions made in the theoretical and experimental literature. In the sixth chapter, arguments for partial melting of host rock carbonates are presented. While at first very surprising, this is to be expected when considering the prior results from the intrusive study and experiments from the literature. Partial melting is documented by compelling microstructures and by geochemical and structural data. The necessary conditions are far from extreme and this process might be more frequent than previously thought. The carbonate melt is highly mobile and can move along grain boundaries, infiltrating other rocks and ultimately altering the existing mineral assemblage. Finally, a mineralogical curiosity is presented in chapter seven. The mineral assemblage magnesite plus calcite is in apparent equilibrium. It is well known that these two carbonates are not stable together in the system CaO-MgO-FeO-CO2: magnesite and calcite should react to dolomite during metamorphism. The explanation presented for this "forbidden" assemblage is that a calcite melt infiltrated the magnesite-bearing rock along grain boundaries and caused the peculiar microstructure. This is supported by isotopic disequilibrium between calcite and magnesite. A further implication of partially molten carbonates is that the host rock drastically loses its strength, so that its physical properties may be comparable to those of the intrusive rocks. This contrasting behavior of the host rock may ease the emplacement of the intrusion. We see that the circle closes and the iterative process of better constraining the emplacement could start again.
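
The inverse use of thermal models described for chapter four can be illustrated by a deliberately simple sketch: a 1D finite-difference conduction model of hot magma against cooler host rock, in which a single parameter (here the intrusion half-width) is swept until the peak temperature computed at a given distance in the aureole approaches a target peak temperature taken from petrology. All numbers below are generic illustrations, not the values of the Adamello study.

    import numpy as np

    KAPPA = 1.0e-6                      # thermal diffusivity, m^2/s (generic value)
    T_HOST, T_MAGMA = 300.0, 900.0      # deg C, illustrative
    YEAR_S = 3.15e7                     # seconds per year

    def peak_aureole_temperature(half_width_m, distance_m,
                                 domain_m=4000.0, dx=20.0, total_years=5.0e4):
        """Peak temperature reached at `distance_m` from the contact of an
        instantaneously emplaced sheet intrusion (1D explicit finite differences,
        zero-flux boundary at the sheet centre, fixed far-field temperature)."""
        n = int(domain_m / dx)
        x = np.arange(n) * dx                       # x = 0 at the sheet centre
        T = np.where(x < half_width_m, T_MAGMA, T_HOST).astype(float)
        dt = 0.4 * dx * dx / KAPPA                  # stable explicit time step
        monitor = int((half_width_m + distance_m) / dx)
        peak = T[monitor]
        for _ in range(int(total_years * YEAR_S / dt)):
            T[1:-1] += KAPPA * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
            T[0] = T[1]                             # symmetry (zero-flux) boundary
            peak = max(peak, T[monitor])
        return peak

    # Inverse use of the model: sweep the half-width and compare the computed
    # peak with a target aureole peak temperature estimated from petrology.
    target_peak_c = 550.0
    results = {hw: peak_aureole_temperature(hw, distance_m=200.0)
               for hw in (100.0, 300.0, 500.0, 800.0)}
    for hw, peak in results.items():
        print(f"half-width {hw:5.0f} m -> peak {peak:5.1f} C at 200 m from the contact")
    best = min(results, key=lambda hw: abs(results[hw] - target_peak_c))
    print(f"closest match to a {target_peak_c:.0f} C aureole peak: {best:.0f} m half-width")

The thesis itself matches more elaborate incremental-emplacement models against aureole temperatures, but the principle is the same: only some parameter combinations reproduce the observed thermal record.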

Relevance: 100.00%

Publisher:

Abstract:

With salaries subjected to scrutiny more than ever, it is increasingly important that the process by which they are determined be understood and justifiable. Both public and private organisations now routinely rely on so-called “job evaluation” as a means of constructing an appropriate pay-scale and as such it is ever more necessary that we appreciate how this system works and that we recognise its limits. Only with such an understanding of the way in which salaries are set can we hope to have a meaningful discussion of their economic function. This paper aims to expound the details of job evaluation both in theory and in practice, and critically assess its shortcomings. In Section 1 below we describe the job evaluation system and in Section 2 we briefly outline the history and the usage of the system in both the private and the public sector. In Section 3 we theoretically analyse the often unstated but nonetheless implicit assumptions made by practitioners of the art of job evaluation. Section 4 applies the analysis of Section 3 to review a particular and important case study, namely The Senior Salaries Review of the Welsh Assembly 2004. Section 5 concludes.
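
As a minimal illustration of the points-factor mechanics that job evaluation schemes typically rely on (the factors, weights and pay-band boundaries below are hypothetical and are not drawn from the paper or from the Senior Salaries Review):

    # Hypothetical points-factor job evaluation: each job is scored on a set of
    # factors, the factor scores are weighted and summed, and the total is
    # mapped onto a pay band.
    factor_weights = {"knowledge": 5, "responsibility": 4,
                      "decision-making": 3, "working conditions": 1}

    pay_bands = [(0, "Band A"), (200, "Band B"), (350, "Band C"), (500, "Band D")]

    def evaluate(job_scores):
        """Weighted points total and resulting pay band for one job."""
        total = sum(factor_weights[f] * s for f, s in job_scores.items())
        band = [b for threshold, b in pay_bands if total >= threshold][-1]
        return total, band

    scores = {"knowledge": 40, "responsibility": 35, "decision-making": 25,
              "working conditions": 10}
    total, band = evaluate(scores)
    print(f"total points: {total}, pay band: {band}")

The paper's critique concerns precisely the assumptions hidden in such a construction: the choice of factors, the weights attached to them and the mapping from points to pay.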

Relevance: 100.00%

Publisher:

Abstract:

This paper shows how one of the developers of QWERTY continued to use the trade secret that underlay its development to seek further efficiency improvements after its introduction. It provides further evidence that this was the principle used to design QWERTY in the first place and adds further weight to arguments that QWERTY itself was a consequence of creative design and an integral part of a highly efficient system rather than an accident of history. This further serves to raise questions over QWERTY's forced servitude as 'paradigm case' of inferior standard in the path dependence literature. The paper also shows how complementarities in forms of intellectual property rights protection played integral roles in the development of QWERTY and the search for improvements on it, and also helped effectively conceal the source of the efficiency advantages that QWERTY helped deliver.

Relevance: 100.00%

Publisher:

Abstract:

In cerebral ischemic preconditioning (IPC), a first sublethal ischemia increases the resistance of neurons to a subsequent severe ischemia. Despite numerous studies, the mechanisms are not yet fully understood. Our goal is to develop an in vitro model of IPC in hippocampal organotypic slice cultures. Instead of anoxia, we chose to apply varying degrees of hypoxia, which allows levels of insult graded from mild to severe. Cultures are exposed to combined oxygen and glucose deprivation (OGD) of varying intensities, ranging from mild to severe, and both the electrical activity and cell death are assessed. IPC was accomplished by exposure to the mildest ischemia condition (10% O2 for 15 min) 24 h before the severe deprivation (5% O2 for 30 min). Interestingly, IPC prevented not only delayed ischemic cell death 6 days after the insult but also the transient loss of the evoked potential response. The major interest and advantage of this system over both the acute slice preparation and primary cell cultures is the ability to simultaneously measure delayed neuronal damage and neuronal function.
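
The preconditioning protocol above can be summarised as a simple structured schedule; the representation below is only an illustrative way of encoding the parameters reported in the abstract (oxygen fractions, durations and the 24 h interval), not software used in the study.

    from dataclasses import dataclass

    @dataclass
    class OGDExposure:
        """One combined oxygen-glucose deprivation episode."""
        label: str
        oxygen_percent: float     # O2 fraction during the episode
        duration_min: int         # length of the episode
        start_hour: float         # time from the beginning of the protocol

    # Parameters as reported in the abstract: mild preconditioning episode,
    # then severe OGD 24 h later.
    protocol = [
        OGDExposure("preconditioning (mild)", oxygen_percent=10.0,
                    duration_min=15, start_hour=0.0),
        OGDExposure("test insult (severe)", oxygen_percent=5.0,
                    duration_min=30, start_hour=24.0),
    ]

    for step in protocol:
        print(f"{step.start_hour:5.1f} h: {step.label}: "
              f"{step.oxygen_percent:.0f}% O2 for {step.duration_min} min")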