692 results for Invention


Relevance:

10.00%

Publisher:

Abstract:

Fatigue and sleepiness are major causes of road traffic accidents. However, precise data are often lacking, because no validated and reliable device exists for detecting the level of sleepiness (cf. the breathalyzer for alcohol levels), nor do unambiguous criteria exist for identifying fatigue/sleepiness as a contributing factor in accident causation. Identifying risk factors and risk groups is therefore not always easy. Furthermore, it is extremely difficult to incorporate fatigue in operationalized terms into either traffic or criminal law. The main aims of this thesis were to estimate the prevalence of fatigue problems while driving among the Finnish driving population; to explore how VALT multidisciplinary investigation teams, the Finnish police, and the courts recognize (and prosecute) fatigue in traffic; to identify risk factors and risk groups; and finally to explore the application of the Finnish Road Traffic Act (RTA), which, in Article 63, explicitly forbids driving while tired. Several different sources of data were used: a computerized database and the original folders of the multidisciplinary teams investigating fatal accidents (VALT), the driver records database (AKE), prosecutor and court decisions, a survey of young male military conscripts, and a survey of a representative sample of the Finnish active driving population. The results show that 8-15% of fatal accidents during 1991-2001 were fatigue related, that every fifth Finnish driver has fallen asleep while driving at some point during his/her driving career, and that the Finnish police and courts punish on average one driver per day for fatigued driving (based on data from 2004-2005). The main finding regarding risk factors and risk groups is that the risk of falling asleep while driving is increased during the summer months, especially in the afternoon. Furthermore, the results indicate that those with a higher risk of falling asleep while driving are men in general, especially young male drivers (including military conscripts) and elderly drivers, particularly during afternoon hours and the summer months; professional drivers breaking the rules on duty and rest hours; and drivers with a tendency to fall asleep easily. A time-of-day pattern of sleep-related incidents was repeatedly found. VALT teams can be considered relatively reliable when assessing the role of fatigue and sleepiness in accident causation; thus, similar experts might be valuable as expert witnesses in court proceedings when fatigue or sleepiness is suspected to have played a role in an accident's origins. However, the application of Article 63 of the RTA, which forbids, among other things, fatigued driving, will continue to be an issue that deserves further attention. This attention should come in the context of a needed change in attitudes towards driving in a state of extreme tiredness (e.g., after being awake for more than 24 hours), which produces performance deterioration comparable to illegal intoxication (a BAC of around 0.1%). Given the well-known interactive effect of increased sleepiness and even small amounts of alcohol, the relatively high proportion (up to 14.5%) of Finnish drivers owning and using a breathalyzer raises some concern. These drivers appear more focused on staying under the "magic" line of 0.05% BAC than on their actual driving impairment, which might be much worse than they realize because of the interactive effects of increased sleepiness and even low alcohol consumption. In conclusion, there is no doubt that fatigue and sleepiness problems while driving are common among the Finnish driving population. While we wait for the invention of reliable devices for fatigue/sleepiness detection, we should invest more effort in raising public awareness of the dangers of fatigued driving and in educating drivers about how to recognize and deal with fatigue and sleepiness when they occur.

Relevance:

10.00%

Publisher:

Abstract:

The use of capacitors for electrical energy storage actually predates the invention of the battery. Alessandro Volta is credited with the invention of the battery in 1800; he described it as an assembly of plates of two different materials (such as copper and zinc) placed in an alternating stack and separated by paper soaked in brine or vinegar [1]. Accordingly, this device was referred to as Volta's pile, and it formed the basis of subsequent revolutionary research and discoveries on the chemical origin of electricity. Before the advent of Volta's pile, however, eighteenth-century researchers relied on Leyden jars as a source of electrical energy. Built in the mid-1700s at the University of Leyden in Holland, a Leyden jar is an early capacitor consisting of a glass jar coated inside and outside with a thin layer of silver foil [2, 3]. With the outer foil grounded, the inner foil could be charged with an electrostatic generator (a source of static electricity) and could produce a strong electrical discharge from a small and comparatively simple device.
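For a rough sense of scale, the standard capacitor energy formula can be applied with illustrative values (a capacitance of about 1 nF and a charging voltage of about 30 kV are typical textbook figures for a Leyden jar, not values taken from the sources cited above):

```latex
E = \tfrac{1}{2} C V^{2}
  \approx \tfrac{1}{2} \times (1\,\mathrm{nF}) \times (30\,\mathrm{kV})^{2}
  \approx 0.45\,\mathrm{J}
```

Less than half a joule, but released almost instantaneously, which is consistent with the strong discharge these small devices were known for.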

Relevance:

10.00%

Publisher:

Abstract:

Wireless technologies are continuously evolving. Second-generation cellular networks have gained worldwide acceptance. Wireless LANs are commonly deployed in corporations and university campuses, and their diffusion in public hotspots is growing. Third-generation cellular systems have yet to establish themselves everywhere; still, there is an impressive amount of ongoing research into deploying beyond-3G systems. These new wireless technologies combine the characteristics of WLAN-based and cellular networks to provide increased bandwidth. All of these efforts in wireless technology are headed in a common direction: IP-based communication. Telephony services have been the killer application for cellular systems, and their evolution to packet-switched networks is a natural path. Effective IP telephony signaling protocols, such as the Session Initiation Protocol (SIP) and the H.323 protocol, are needed to establish IP-based telephony sessions. However, IP telephony is just one example of an IP-based communication service. IP-based multimedia sessions are expected to become popular and to offer a wider range of communication capabilities than pure telephony. In order to join the advances of future wireless technologies with the potential of IP-based multimedia communication, the next step is to obtain ubiquitous communication capabilities. According to this vision, people must be able to communicate even when no support from a network infrastructure is available, needed, or desired. In order to achieve ubiquitous communication, end devices must integrate all the capabilities necessary for IP-based distributed and decentralized communication. Such capabilities are currently missing; for example, it is not possible to use native IP telephony signaling protocols in a totally decentralized way. This dissertation presents a solution for deploying the SIP protocol in a decentralized fashion, without the support of infrastructure servers. The proposed solution is mainly designed to fit the needs of decentralized mobile environments and can be applied to small-scale ad-hoc networks as well as to larger networks with hundreds of nodes. A framework allowing the discovery of SIP users in ad-hoc networks and the establishment of SIP sessions among them, in a fully distributed and secure way, is described and evaluated. Security support allows ad-hoc users to authenticate the sender of a message and to verify the integrity of a received message. The distributed session management framework has been extended to achieve interoperability with the Internet and with native Internet applications. With limited extensions to the SIP protocol, we have designed and experimentally validated a SIP gateway allowing SIP signaling between ad-hoc networks with private addressing space and native SIP applications in the Internet. The design is completed by an application-level relay that permits instant messaging sessions to be established in heterogeneous environments. The resulting framework constitutes a flexible and effective approach for the pervasive deployment of real-time applications.
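To make the serverless idea concrete, here is a minimal sketch of how SIP users could announce and discover each other over UDP multicast in an ad-hoc network. This is not the dissertation's actual framework; the multicast group, port, and plain-text message format are assumptions for illustration only, and the security layer described above is omitted.

```python
# Minimal sketch of serverless SIP user discovery over UDP multicast.
import socket
import struct

MCAST_GRP = "239.255.0.1"   # assumed ad-hoc discovery group
MCAST_PORT = 5353           # assumed port, not an official SIP allocation

def announce(sip_uri: str) -> None:
    """Broadcast our SIP URI to the ad-hoc discovery group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(sip_uri.encode(), (MCAST_GRP, MCAST_PORT))
    sock.close()

def listen(timeout: float = 5.0) -> dict:
    """Collect SIP URIs announced by peers; returns {uri: sender_ip}."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(timeout)
    peers = {}
    try:
        while True:
            data, addr = sock.recvfrom(1024)
            peers[data.decode()] = addr[0]
    except socket.timeout:
        pass
    sock.close()
    return peers

if __name__ == "__main__":
    announce("sip:alice@adhoc.local")   # hypothetical user
    print(listen(2.0))
```

A real deployment would add sender authentication and message integrity checks, as the dissertation's framework does, and would carry proper SIP messages rather than bare URIs.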

Relevance:

10.00%

Publisher:

Abstract:

Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem: searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine, i.e., they should also hold in future data. This is an important distinction from traditional association rules, which, in spite of their name and a similar appearance to dependency rules, do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules using statistical significance measures. Another important objective is to search only for non-redundant rules, which express the real causes of the dependence without incidental extra factors. The extra factors do not add any new information on the dependence; they can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaved statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, like Fisher's exact test, the chi-squared measure, mutual information, and z-scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test; it can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over existing solutions. In practice, this means that the user does not have to worry about whether the dependencies hold in future data or whether the data still contains better, undiscovered dependencies.
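As an illustration of the kind of measure the algorithm optimizes, the following sketch evaluates a single candidate rule X->A with Fisher's exact test on toy binary data. The data and the rule are hypothetical; the thesis's actual contribution, the pruning theory that makes exhaustive search over all rules feasible, is not reproduced here.

```python
# Significance of one dependency rule X -> A via Fisher's exact test.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(1000, 4))   # toy binary data, 4 attributes
X_cols, a_col = [0, 1], 3                    # hypothetical rule {0,1} -> 3

x_holds = data[:, X_cols].all(axis=1)        # rows where all of X is true
a_holds = data[:, a_col] == 1                # rows where A is true

# 2x2 contingency table: rows = X / not X, columns = A / not A.
table = [
    [int(( x_holds &  a_holds).sum()), int(( x_holds & ~a_holds).sum())],
    [int((~x_holds &  a_holds).sum()), int((~x_holds & ~a_holds).sum())],
]
# One-sided test: does X increase the probability of A?
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```

The thesis's search problem is then to find, among the exponentially many candidate rules, the ones with the smallest such p-values, without enumerating them all.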

Relevance:

10.00%

Publisher:

Abstract:

I examine the portrayal of Jesus as a friend of toll collectors and sinners in the Third Gospel. I aim at a comprehensive view of the Lukan sinner texts, combining questions of the origin and development of these texts with questions of Luke's theological message, of how the texts function as literature, and of the social-historical setting(s) behind them. Within New Testament scholarship, researchers on the historical Jesus mostly still hold that a special mission to toll collectors and sinners was central to Jesus' public activity. Within Lukan studies, M. Goulder, J. Kiilunen and D. Neale have claimed that this picture is due to Luke's theological vision and the liberties he took as an author. Their view is disputed by other Lukan scholars. I discuss the methods scholars have used to isolate the typical language of Luke's alleged written sources, or to argue for a source-free creation by Luke himself. I claim that the analysis of Luke's language does not help us determine the origin of the Lukan pericopes. I examine the possibility of free creativity on Luke's part in the light of the invention technique used in ancient historiography. Invention was an essential part of all ancient historical writing, and therefore quite probably Luke used it too. Possibly Luke had access to special traditions, but the nature of oral tradition does not allow reconstruction. I analyze Luke 5:1-11; 5:27-32; 7:36-50; 15:1-32; 18:9-14; 19:1-10; 23:39-43. In most of these, some underlying special tradition is possible, though far from certain. It becomes evident that Luke's reshaping was so thorough that the pericopes as they now stand are decidedly Lukan creations. This is indicated by the characteristic Lukan story-telling style as well as by the strongly unified Lukan theology of the pericopes. Luke's sinners and Pharisees do not fit the social-historical context of Jesus' day. The story-world is one of polarized right and wrong. That Jesus is the Christ, the representative of God, is an intrinsic part of the story-world. Luke wrote a theological drama inspired by tradition. He persuaded his audience to identify as (repenting) sinners. Luke's motive was that he saw the sinners in Jesus' company as forerunners of Gentile Christianity.

Relevance:

10.00%

Publisher:

Abstract:

This thesis is an assessment of the hoax hypothesis, propagated mainly in Stephen C. Carlson's 2005 monograph "The Gospel Hoax: Morton Smith's Invention of Secret Mark", which suggests that professor Morton Smith (1915-1991) forged Clement of Alexandria's letter to Theodore. Smith claimed to have discovered this letter as an 18th-century copy in the monastery of Mar Saba in 1958. The Introduction narrates Morton Smith's discovery story and traces the manuscript's whereabouts up to its apparent disappearance in 1990, followed by a brief history of scholarship on the MS and some methodological considerations. Chapters 2 and 3 deal with the arguments for the hoax (mainly by Stephen C. Carlson) and against it (mainly by Scott G. Brown). Chapter 2 looks at the MS in its physical aspects, and Chapter 3 assesses its subject matter. I conclude that some of the details fit reasonably well with the hoax hypothesis, but on the whole the arguments against it are more persuasive. Carlson's use of QDE (Questioned Document Examination) analysis, in particular, has many problems. Comparing the handwriting of Clement's letter to Morton Smith's handwriting, I conclude that there are some "repeated differences" between them, suggesting that Smith is not the writer of the disputed letter. Clement's letter to Theodore most likely derives from antiquity, though the exact details of its character are not discussed at length in this thesis. In Chapter 4, I take a special look at Stephen C. Carlson's arguments proposing that Morton Smith hid clues to his identity in the MS and the materials surrounding it. Comparing these alleged clues to known pseudoscientific works, I conclude that Carlson here uses methods normally reserved for building a conspiracy theory; thus Carlson's hoax hypothesis has serious methodological flaws with respect to these hidden clues. I construct a model of these questionable methods, titled "a boisterous pseudohistorical method", that contains three parts: 1) beginning with a question that implicitly contains its answer from the start, 2) treating anything as evidence for the conspiracy theory, and 3) abandoning probability and thinking that literally everything is connected. I propose that Stephen C. Carlson uses these pseudoscientific methods in his unearthing of Morton Smith's "clues". Chapter 5 looks briefly at the literary genre I title the "textual-puzzle thriller". Because even biblical scholarship follows the signs of the times, I propose that Carlson's hoax hypothesis has its literary equivalents in fiction in titles like Dan Brown's "Da Vinci Code" and in academic works in titles like John Dart's "Decoding Mark". All of these are interested in solving textual puzzles, even though the methodological choices are not acceptable in scholarship. Thus the hoax hypothesis as a whole is either unpersuasive or plain bad science.

Relevance:

10.00%

Publisher:

Abstract:

Nanotechnology is a new technology that is generating a lot of interest among academics, practitioners and scientists. Critical research is being carried out in this area all over the world. Governments are creating policy initiatives to promote developments in nanoscale science and technology. Private investment is also rising. A large number of academic institutions and national laboratories have set up research centers working on the multiple applications of nanotechnology. A wide range of applications is claimed for nanotechnology, from materials, chemicals, textiles and semiconductors to drug delivery systems and diagnostics. Nanotechnology is considered to be the next big wave of technology after information technology and biotechnology. In fact, nanotechnology holds the promise of advances that exceed those achieved in recent decades in computers and biotechnology. Much of the interest in nanotechnology may also be due to the enormous monetary benefits expected from nanotechnology-based products. According to the NSF, revenues from nanotechnology could touch $1 trillion by 2015. However, much of these benefits are projections. Realizing the claimed benefits requires the successful development of nanoscience and nanotechnology research efforts; that is, the journey from invention to innovation has to be completed. For this to happen, the technology has to flow from laboratory to market. Nanoscience and nanotechnology research efforts have to come out in the form of new products, new processes, and new platforms. India has also started its nanoscience and nanotechnology development program under its 10th Five Year Plan, and funds worth Rs. 1 billion have been allocated for nanoscience and nanotechnology research and development. The aim of this paper is to assess nanoscience and nanotechnology initiatives in India. We propose a conceptual model derived from the resource-based view of innovation. We have developed a structured questionnaire to measure the constructs in the conceptual model. Responses have been collected from 115 scientists and engineers working in the field of nanoscience and nanotechnology. The responses have been analyzed using principal component analysis, cluster analysis and regression analysis.
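For readers unfamiliar with the named techniques, the following sketch shows the general shape of such an analysis pipeline on synthetic stand-in data. The survey items, the outcome construct, and all parameter choices below are assumptions; the paper's actual questionnaire and measurement model are not reproduced here.

```python
# Sketch of a PCA + clustering + regression pipeline on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
responses = rng.normal(size=(115, 20))    # 115 respondents, 20 survey items
innovation_score = rng.normal(size=115)   # hypothetical outcome construct

# 1) Reduce the 20 items to a few principal components (factors).
components = PCA(n_components=5).fit_transform(responses)
# 2) Group respondents by their component scores.
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(components)
# 3) Regress the outcome construct on the components.
model = LinearRegression().fit(components, innovation_score)

print("R^2 =", model.score(components, innovation_score))
print("cluster sizes:", np.bincount(clusters))
```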

Relevance:

10.00%

Publisher:

Abstract:

Embryonic stem cells potentially offer ground-breaking insights into health and disease and are said to offer hope of discovering cures for many ailments unimaginable a few years ago. Human embryonic stem cells are undifferentiated, immature cells that possess an amazing ability to develop into almost any body cell, such as heart muscle, bone, nerve and blood cells, and possibly even organs in due course. This remarkable feature, together with their ability to proliferate indefinitely in vitro (in a test tube), has branded them a so-called "miracle cure". Their potential use in clinical applications provides hope to many sufferers of debilitating and fatal medical conditions. However, the emergence of stem cell research has resulted in intense debates about its promises and dangers. On the one hand, advocates hail its potential for alleviating and even curing fatal and debilitating diseases such as Parkinson's disease, diabetes, heart ailments and so forth. On the other hand, opponents decry its dangers, drawing attention to the inherent risks of human embryo destruction, cloning for research purposes, and eventually reproductive cloning. Lately, however, the policy battles surrounding human embryonic stem cell innovation have shifted from controversy over the research itself to disputes over intellectual property rights. In fact, the ability to obtain patents represents a pivotal factor in the economic success or failure of this new biotechnology. Although stem cell patents tend more or less to satisfy the standard patentability requirements, they also raise serious ethical and moral questions about the meaning of the exclusions on ethical or moral grounds found in European and, to an extent, American and Australian patent laws. At present there is something of a crisis over human embryonic stem cell patents in Europe and, to an extent, in Australia and the United States. This has created a sense of urgency to engage all relevant parties in the discourse on how best to approach the patenting of this new form of scientific innovation. In essence, this should become a highly favoured patenting priority; otherwise, stem cell innovation and its reliance on patent protection risk turmoil, uncertainty, confusion and even a halt not only to stem cell research but also to further emerging biotechnology research and development. The patent system is premised upon the fundamental principle of balance, which ought to ensure that the temporary monopoly awarded to the inventor equals the social benefit provided by the disclosure of the invention. Ensuring and maintaining this balance within the patent system when patenting human embryonic stem cells is of crucial contemporary relevance. Yet the patenting of human embryonic stem cells raises some fundamental moral, social and legal questions. Overall, the present approach to patenting human embryonic stem cell related inventions is unsatisfactory and ineffective. This draws attention to a specific question which provides the conceptual framework for this work: how can the investigated patent offices successfully deal with the patentability of human embryonic stem cells? This in turn points to the thorny issue of the application of the morality clause in this field, in particular the interpretation of the exclusions on ethical or moral grounds found in Australian, American and European legislative and judicial precedents.
The Thesis seeks to compare the laws and legal practices surrounding the patentability of human embryonic stem cells in Australia and the United States with those of Europe. By using Europe as the primary case study for lessons and guidance, the central goal of the Thesis becomes the determination of the type of solutions available to Europe, with prospects of applying them to Australia and the United States. The Dissertation aims to define the ethical implications that arise with patenting human embryonic stem cells and to offer resolutions to the key ethical dilemmas surrounding the patentability of human embryonic stem cells and other morally controversial biotechnology inventions. In particular, the Thesis's goal is to propose a functional framework that may be used as a benchmark for an informed discussion of how to resolve the ethical and legal tensions that come with the patentability of human embryonic stem cells in the Australian, American and European patent worlds. Key research questions that arise from these objectives, and which thread continuously throughout the monograph, are:
1. How do common law countries such as Australia and the United States approach and deal with the patentability of human embryonic stem cells in their jurisdictions? These practices are then compared to the situation in Europe as represented by the United Kingdom (first two chapters) and by the Court of Justice of the European Union and European Patent Office decisions (Chapter 3 onwards), in order to obtain a full picture of present patenting procedures on European soil.
2. How are ethical and moral considerations taken into account at the investigated patent offices when assessing the patentability of human embryonic stem cell related inventions? In order to assess this, the Thesis evaluates how the ethical issues that arise with patent applications are dealt with by: a) the legislative history of the modern patent system, from its inception in 15th-century England to present-day patent laws; b) Australian, American and European patent offices, presently and in the past, including other relevant legal precedents on the subject matter; c) normative ethical theories; d) the notion of human dignity, used as the lowest common denominator for the interpretation of the European morality clause.
3. Given the existence of the morality clause in the form of Article 6(1) of Directive 98/44/EC of the European Parliament and of the Council of 6 July 1998 on the legal protection of biotechnological inventions, which corresponds to Article 53(a) of the European Patent Convention, a special emphasis is put on Europe as a guiding principle for Australia and the United States. Any room for improvement of the European morality clause and of Europe's current manner of evaluating the ethical tensions surrounding human embryonic stem cell inventions is examined.
4. A summary of the options (as represented by Australia, the United States and Europe) available as a basis for an optimal examination procedure for human embryonic stem cell inventions is depicted, and the best of these alternatives is deduced in order to create a benchmark framework. This framework is then promoted as a tool to assist Europe (as represented by the European Patent Office) in examining human embryonic stem cell patent applications. This method suggests the possibility of implementing an institutional solution.
5. Ultimately, the question of whether such a reformed European patent system can be used as a foundation stone for a potential patent reform in Australia and the United States when examining human embryonic stem cells or other morally controversial inventions is surveyed.
The author wishes to emphasise that the guiding thought in carrying out this work is to convey the significance of identifying, analysing and clarifying the ethical tensions surrounding the patenting of human embryonic stem cells, and ultimately to present a solution that adequately assesses the patentability of human embryonic stem cell inventions and related biotechnologies. In answering the key questions above, the Thesis strives to contribute to the broader stem cell debate about how and to what extent ethical and social positions should be integrated into the patenting procedure in the pluralistic and morally divided democracies of Europe and, subsequently, Australia and the United States.

Relevance:

10.00%

Publisher:

Abstract:

Simulation is an important means of evaluating new microarchitectures. With the invention of multi-core (CMP) platforms, simulators are becoming larger and more complex. However, with the availability of CMPs with larger caches and higher operating frequencies, the wall clock time required for simulating an application has become comparatively shorter. Reducing this simulation time further is a great challenge, especially in the case of multi-threaded workloads, due to the indeterminacy introduced by the various simultaneously executing threads. In this paper, we propose a technique for speeding up multi-core simulation. The models of the processor core and cache are replaced with functional models to achieve speedup. A timed Petri net model is used to estimate the execution time of the processor, and the memory access latencies are estimated using hit/miss information obtained from the functional model of the cache. This model can be used to predict the performance of data-parallel applications or multiprogramming workloads on CMP platforms with various cache hierarchies and a shared bus interconnect. The error in the estimation of the execution time of an application is within 6%. The speedup achieved averages between 2x and 4x over the cycle-accurate simulator.
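The following is a minimal sketch of the timed-Petri-net idea: transitions carry delays, and firing them accumulates an execution-time estimate. It is a toy illustration only; the paper's model is far more detailed (for instance, hit/miss information from the functional cache model would select the memory latency), and the marking, delays, and sequential firing rule below are assumptions.

```python
# Toy timed Petri net that accumulates an execution-time estimate.
class TimedPetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place name -> token count
        self.time = 0.0                # accumulated estimated time
        self.transitions = []          # (input places, output places, delay)

    def add_transition(self, inputs, outputs, delay):
        self.transitions.append((inputs, outputs, delay))

    def step(self):
        """Fire the first enabled transition; return False if none is enabled.
        Firings are serialized here; a real model would handle concurrency."""
        for inputs, outputs, delay in self.transitions:
            if all(self.marking.get(p, 0) > 0 for p in inputs):
                for p in inputs:
                    self.marking[p] -= 1        # consume input tokens
                for p in outputs:
                    self.marking[p] = self.marking.get(p, 0) + 1
                self.time += delay              # timed transition advances clock
                return True
        return False

# Toy workload: 3 instructions, each executed then hitting memory.
net = TimedPetriNet({"ready": 3})
net.add_transition(["ready"], ["executed"], 1.0)   # assumed: 1 cycle to execute
net.add_transition(["executed"], ["done"], 10.0)   # assumed: miss latency
while net.step():
    pass
print("estimated cycles:", net.time, "| completed:", net.marking["done"])
```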

Relevance:

10.00%

Publisher:

Abstract:

The work reported in this thesis is an attempt to enhance heat transfer in electronic devices with the use of impinging air jets on pin-finned heat sinks. The cooling performance of electronic devices has attracted increased attention owing to the demands of compact size, higher power densities, and system performance and reliability. Although the technology of cooling has greatly advanced, the main cause of malfunction of electronic devices remains overheating. The problem arises due to the restriction of space and also due to high heat dissipation rates, which have increased from a fraction of a W/cm2 to hundreds of W/cm2. Although several researchers have attempted to address this at the design stage, unfortunately the speed of invention of cooling mechanisms has not kept pace with the ever-increasing requirement for heat removal from electronic chips. As a result, efficient cooling of electronic chips remains a challenge in thermal engineering. Heat transfer can be enhanced in several ways, such as air cooling, liquid cooling, phase-change cooling, etc. However, in certain applications, due to limitations on cost and weight (e.g., airborne applications), air cooling is imperative. Heat transfer can be increased in two ways: first, by increasing the heat transfer coefficient (forced convection), and second, by increasing the surface area of heat transfer (finned heat sinks). From the previous literature it was established that, for a given volumetric air flow rate, jet impingement is the best option for enhancing the heat transfer coefficient, and for a given volume of heat sink material, pin-finned heat sinks are the best option because of their high surface-area-to-volume ratio. There are certain applications where very high jet velocities cannot be used because of noise limitations and the presence of delicate components. This process can be further improved by pulsating the jet. A steady jet often stabilizes the boundary layer on the surface to be cooled. Enhancement of convective heat transfer can be achieved if the boundary layer is broken. Disruptions in the boundary layer can be caused by pulsating the impinging jet, i.e., making the jet unsteady. Besides, the pulsations lead to chaotic mixing, i.e., the fluid particles no longer follow well-defined streamlines but move unpredictably through the stagnation region. Thus the flow mimics turbulence at low Reynolds numbers. The pulsation should be done in such a way that the boundary layer is disturbed periodically and yet adequate coolant is made available, so that there is not much variation in temperature during one pulse cycle. From the previous literature it was found that a square waveform is most effective in enhancing heat transfer. In the present study, the combined effect of a pin-finned heat sink and an impinging slot jet, both steady and unsteady, has been investigated for both laminar and turbulent flows. The effects of fin height and impingement height have been studied. The jets have been pulsated in a square waveform to study the effects of frequency and duty cycle. This thesis attempts to increase our understanding of slot jet impingement on pin-finned heat sinks through numerical investigations. A systematic study is carried out using the finite-volume code FLUENT (Version 6.2) to solve the thermal and flow fields. The standard k-ε model for the turbulence equations and the two-layer zonal model for the wall function are used. Pressure-velocity coupling is handled using the SIMPLE algorithm with a staggered grid.
The parameters that affect the heat transfer coefficient are: the height of the fins, the total height of impingement, the jet exit Reynolds number, the frequency of the jet, and the duty cycle (the percentage of time the jet is flowing during one complete pulse cycle). From the studies carried out it was found that: a) beyond a certain fin height, the rate of enhancement of heat transfer becomes very low with further increase in height; b) the heat transfer enhancement is much more sensitive to changes at low Reynolds numbers than at high Reynolds numbers; c) for a given total height of impingement, the use of fins and a pulsated jet increases the effective heat transfer coefficient by almost 200% for the same average Reynolds number; d) for all cases, the optimum pulsation frequency is around 50-100 Hz and the optimum duty cycle around 25-33.33%; e) in the case of turbulent jets, the enhancement in heat transfer due to pulsations is much smaller than in the case of laminar jets.
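For concreteness, the square-wave pulsation described above can be written down directly. This small sketch (the velocity, frequency, and duty-cycle values are illustrative, not the thesis's boundary conditions) generates the jet exit velocity signal and shows how the mean velocity, and hence the average Reynolds number, scales with the duty cycle.

```python
# Square-wave jet pulsation: ON for a fraction (duty cycle) of each period.
import numpy as np

def square_wave_jet(t, v_jet, freq_hz, duty_cycle):
    """Jet exit velocity at times t for a square-wave pulsation."""
    phase = (t * freq_hz) % 1.0            # position within one period, in [0, 1)
    return np.where(phase < duty_cycle, v_jet, 0.0)

t = np.linspace(0.0, 0.04, 400)            # a 40 ms window (two 50 Hz periods)
v = square_wave_jet(t, v_jet=10.0, freq_hz=50.0, duty_cycle=0.25)
# The mean velocity scales with the duty cycle: ~ 10 m/s * 0.25 = 2.5 m/s.
print("mean velocity:", v.mean())
```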

Relevance:

10.00%

Publisher:

Abstract:

Environment-friendly management of fruit flies using pheromones is useful in reducing the undesirable pest populations responsible for decreased yield and crop quality. A nanogel has been prepared from a pheromone, methyl eugenol (ME), using a low-molecular-mass gelator. The nanogel was very stable under open ambient conditions and slowed the evaporation of the pheromone significantly. This enabled easy handling and transportation without refrigeration, and reduced the frequency of pheromone recharging in the orchard. Notably, the nano-gelled pheromone brought about effective management of Bactrocera dorsalis, a prevalent harmful pest of a number of fruits including guava. Thus a simple, practical and low-cost green chemical approach has been developed that has significant potential for crop protection, with long-lasting residual activity, excellent efficacy and a favorable safety profile. This makes the present invention well suited for pest management in a variety of crops.

Relevance:

10.00%

Publisher:

Abstract:

Empirical research available on technology transfer initiatives is either North American or European. The literature of the last two decades shows various research objectives, such as identifying the variables to be measured and the statistical methods to be used in studying university-based technology transfer initiatives. AUTM survey data from 1996 to 2008 provide insightful patterns about North American technology transfer initiatives; we use these data in this paper. This paper has three sections: a comparison of North American universities with (n=1129) and without medical schools (n=786), an analysis of the top 75th percentile of these samples, and a DEA analysis of these samples. We use 20 variables. Researchers have attempted to classify university-based technology transfer variables into multiple stages, namely disclosures, patents and license agreements. Using the same approach, though with minor variations, three stages are defined in this paper. The first stage takes R&D expenditure as input and invention disclosures as output. The second stage takes invention disclosures as input and patents issued as output. The third stage takes patents issued as input and technology transfers as outcomes.
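As a sketch of the DEA step, the following implements a basic input-oriented CCR efficiency model with a linear-programming solver on toy data. The universities, the single input, and the single output below are stand-ins, not the paper's 20 AUTM variables.

```python
# Input-oriented CCR DEA (envelopment form) via scipy's LP solver.
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 universities, 1 input (R&D spend), 1 output (disclosures).
X = np.array([[100.0], [150.0], [120.0], [ 90.0]])   # inputs,  shape (n, m)
Y = np.array([[ 20.0], [ 25.0], [ 30.0], [ 10.0]])   # outputs, shape (n, s)
n = X.shape[0]

def dea_efficiency(k):
    """Efficiency of DMU k: min theta s.t. X'lam <= theta*x_k, Y'lam >= y_k."""
    # Decision vector z = [theta, lam_1 .. lam_n]; objective: minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input rows:  sum_j lam_j * x_j  -  theta * x_k  <=  0
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
    b_in = np.zeros(X.shape[1])
    # Output rows: -sum_j lam_j * y_j  <=  -y_k   (i.e. outputs at least y_k)
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun   # theta = 1.0 means the DMU lies on the efficient frontier

for k in range(n):
    print(f"DMU {k}: efficiency = {dea_efficiency(k):.3f}")
```

With these toy numbers, the university with the best output-to-input ratio scores 1.0 and the others score proportionally lower; a multi-stage analysis like the paper's would repeat this per stage.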

Relevance:

10.00%

Publisher:

Abstract:

Unmet clinical needs remain the primary driving force for innovation in medical devices. While appropriate mechanisms to protect these innovative outcomes are essential, clinical trials to ensure safety are also mandated before an invention is ready for public use. Literature explaining the relationship between patenting activity and clinical trials of medical devices is scarce. Linking patent ownership to clinical trials may imply product leadership and value chain control. In this paper, we use patent data from the Indian Patent Office (IPO) and the PCT, and data from the Clinical Trials Registry of India (CTRI), to identify whether patent assignees play a leading role as primary sponsors of clinical trials. A total of 42 primary sponsors are identified from the CTRI database. The number of patents awarded to these primary sponsors for the particular medical device, the total number of patents awarded to the primary sponsor across all technologies, and the total number of patents in the specific medical device technology provide an indication of leadership and control in the value chain.
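The linkage itself is a straightforward join between the two databases. The sketch below illustrates it with hypothetical records and column names; the real CTRI and IPO schemas differ, and no figures here come from the paper.

```python
# Sketch: link trial sponsors (CTRI-like) to patent assignees (IPO-like).
import pandas as pd

trials = pd.DataFrame({
    "sponsor":  ["Acme Devices", "MedCo", "Acme Devices"],      # hypothetical
    "trial_id": ["CTRI/2020/01", "CTRI/2020/02", "CTRI/2021/03"],
})
patents = pd.DataFrame({
    "assignee":  ["Acme Devices", "Acme Devices", "OtherCorp"], # hypothetical
    "patent_no": ["IN101", "IN102", "IN103"],
})

# Patents per assignee, used as an indicator of value-chain control.
counts = patents.groupby("assignee").size().rename("patent_count")
linked = trials.merge(counts, left_on="sponsor", right_index=True, how="left")
print(linked.fillna({"patent_count": 0}))
```

In practice, name harmonization between assignee and sponsor strings is the hard part of such a linkage; exact matching as above is only a starting point.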

Relevance:

10.00%

Publisher:

Abstract:

Monographic issue: El viaje y sus discursos (The journey and its discourses)

Relevance:

10.00%

Publisher:

Abstract:

This work is part of an empirical investigation applying observational methodology to the incipient constructive activity characteristic of spontaneous psychomotor practice at two years of age. It is an idiographic study; the observational design used is nomothetic, follow-up, and multidimensional. The observation instrument developed ad hoc for recording constructive behavior is the field format "construction in psychomotor practice during the third year of life". The reliability of the instrument is established from the degree of agreement between observers. The results, obtained through co-occurrence analysis, report on the conditions, modalities, tendencies, evolution, and levels of action that these first constructive behaviors display. In accordance with Wallon's psychogenetic theory, this construction activity can be considered the invention of new behaviors suited to new situations. Its suspension gives rise to enactive symbols, precursors of children's symbolic play.