990 results for key innovations
Abstract:
Strong supramolecular interactions, which induce tight packing and molecular rigidity in crystals of the cyano-substituted oligo(para-phenylene vinylene) CN-DPDSB, are the key factor behind the high luminescence efficiency of its crystals; in contrast, its isolated molecules in solution have very low luminescence efficiency.
Abstract:
RNA interference (RNAi) is an evolutionarily conserved mechanism by which double-stranded RNA (dsRNA) initiates post-transcriptional silencing of homologous genes. Here we report the amplification and characterisation of a full-length cDNA from the black tiger shrimp (Penaeus monodon) that encodes the bidentate RNase III Dicer, a key component of the RNAi pathway. The full-length shrimp Dicer (Pm Dcr1) cDNA is 7629 bp long, including a 5' untranslated region (UTR) of 130 bp, a 3' UTR of 77 bp, and an open reading frame of 7422 bp encoding a polypeptide of 2473 amino acids with an estimated molecular mass of 277.895 kDa and a predicted isoelectric point of 4.86. Analysis of the deduced amino acid sequence indicated that the mature peptide contains all seven recognised functional domains and is most similar to the mosquito (Aedes aegypti) Dicer-1 sequence, with a similarity of 34.6%. Quantitative RT-PCR analysis showed that Pm Dcr1 mRNA is most highly expressed in haemolymph and lymphoid organ tissues (P < 0.05). However, there was no correlation between Pm Dcr1 mRNA levels in the lymphoid organ and the viral genetic loads in shrimp naturally infected with gill-associated virus (GAV) and Mourilyan virus (P > 0.05). Treatment with synthetic dsRNA corresponding to the Pm Dcr1 sequence resulted in knock-down of Pm Dcr1 mRNA expression in both uninfected shrimp and shrimp infected experimentally with GAV. Knock-down of Pm Dcr1 expression resulted in more rapid mortalities and higher viral loads. These data demonstrate that Dicer is involved in antiviral defence in shrimp. (c) 2007 Elsevier Ltd. All rights reserved.
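The reported lengths are internally consistent, as a quick arithmetic check (using only figures quoted in the abstract) confirms:

```python
# Consistency check of the reported Pm Dcr1 cDNA structure.
utr5, orf, utr3 = 130, 7422, 77      # bp, as quoted in the abstract
total = utr5 + orf + utr3
print(total)                         # 7629 bp, the reported full length

codons = orf // 3                    # 2474 codons, including the stop codon
residues = codons - 1                # the stop codon encodes no residue
print(residues)                      # 2473 amino acids, as reported
```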
Abstract:
Research related to carbon geochemistry and biogeochemistry in the East China Sea is reviewed in this paper. The East China Sea is an annual net sink for atmospheric CO2 and a large net source of dissolved inorganic carbon to the ocean. The sea absorbs CO2 from the atmosphere in spring and summer and releases it in autumn and winter. The East China Sea is a CO2 sink in summer because Changjiang River freshwater flows into it. The net average sea-air interface carbon flux of the East China Sea is estimated to be about 4.3 × 10^6 t/y. Vertical carbon transport is mainly in the form of particulate organic carbon in spring; more than 98% of total carbon is transported in this form in surface water, and the fraction exceeds 68% in near-bottom water. In the southern East China Sea, the average particulate organic carbon inventory was about one-tenth that of the dissolved organic carbon. Research indicates that the southern Okinawa Trough is an important site for particulate organic carbon export from the shelf. The annual cross-shelf exports are estimated to be 414 and 106 Gmol/y for dissolved organic carbon and particulate organic carbon, respectively. Near-bottom transport could be the key process for shelf-to-deep-sea export of biogenic and lithogenic particles.
Abstract:
During routine identification of grasshoppers from the Dasa River, Guizhou Province, China in 2004, a new species, Oxya guizhouensis sp. nov., of the genus Oxya Serville (Orthoptera, Acrididae, Catantopinae) was discovered; it is described here. A key to all known species of the genus from China is given. The type specimens are deposited in the Museum of Hebei University (MHU), Baoding, Hebei, China.
Abstract:
A new species, Atractomorpha taiwanensis sp. n., from Taiwan, China, is described in this paper. The new species is similar to A. micropenna Zheng, 1992, but differs from the latter as follows: the lateral lobe of the pronotum lacks a membranous area near the posterior margin; the tegmina are strongly shortened, not reaching (in the male) the midpoint of the hind femur; and the wings are very small, not reaching the midpoint of the tegmina. A key to all known species of the genus Atractomorpha from China is given. The type specimens are deposited in the Museum of Hebei University, China.
Abstract:
A new species, Bryodema nigrofrascia, of the genus Bryodema Fieber, 1853 (Orthoptera, Acridoidea, Acrididae, Oedipodinae) from China is described. A key to the known species of the genus is given. The type specimens are deposited in the Northwest Plateau Institute of Biology, Chinese Academy of Sciences, Xining, Qinghai.
Abstract:
Geophysical inversion is the theory of transforming observational data into corresponding geophysical models. The goal of seismic inversion is not only wave velocity models but also the fine structure and dynamic processes of the Earth's interior, extending to further parameters such as density, anisotropy, viscosity and so on. Inversion theory is divided into linear and non-linear inversion theories. Over the past 40 years, linear inversion theory has developed into a complete and systematic theory with extensive applications in practice, while many urgent problems remain to be solved in non-linear inversion theory and practice. Based on the wave equation, this dissertation is mainly concerned with theoretical research on several non-linear inversion methods: waveform inversion, traveltime inversion and the joint inversion combining the two. The objective of gradient waveform inversion is to find a geologic model such that the synthetic seismograms it generates best fit the observed seismograms. In contrast with other inverse methods, waveform inversion uses all the characteristics of the waveform and has high resolution. But waveform inversion is an interface-by-interface method, and an artificial parameter limit must be provided in each inversion iteration. In addition, waveform inversion tends to get stuck in local minima if the starting model is too far from the actual model. Based on the velocity scanning used in traditional seismic data processing, a layer-by-layer waveform inversion method is developed in this dissertation to address these weaknesses. In wave-equation traveltime inversion (WT), the wave equation is used to calculate the traveltime and its derivative (the perturbation of traveltime with respect to velocity). Unlike traditional ray-based traveltime inversion, WT has many advantages: no ray tracing, traveltime picking or high-frequency assumption is necessary, and good results can be obtained even when the starting model is far from the real model. Compared with waveform inversion, however, WT has low resolution. Waveform inversion and WT have complementary advantages and similar algorithms, which makes their joint inversion a better inversion method. Another key point this dissertation emphasizes is how to exploit these complementary advantages fully without increasing storage space or the amount of computation. Numerical tests are implemented to demonstrate the feasibility of the inversion methods mentioned above. For gradient waveform inversion in particular, field data are inverted; these data were acquired by our group in Wali park and Shunyi district. Real-data processing shows that waveform inversion faces many problems with real data, of which the matching of synthetic with observed seismograms and noise cancellation are the two primary ones. In conclusion, building on previous experience, this dissertation implements waveform inversion based on the acoustic and elastic wave equations, traveltime inversion based on the acoustic wave equation, and traditional combined waveform-traveltime inversion. Beyond the traditional analysis of inversion theory, there are two innovations: layer-by-layer inversion of seismic reflection data and a rapid method for acoustic wave-equation joint inversion.
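The velocity-scanning idea underlying the layer-by-layer scheme can be illustrated with a minimal sketch. This is a toy single-reflector model, not the dissertation's implementation; the Ricker wavelet, the known reflector depth and the velocity grid are all assumptions made for the illustration:

```python
import numpy as np

def ricker(t, f=25.0):
    """Ricker wavelet with dominant frequency f (Hz)."""
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

t = np.linspace(0.0, 1.0, 2001)     # time axis, s
depth = 500.0                       # reflector depth (m), assumed known

def seismogram(v):
    # one reflection event delayed by the two-way traveltime 2*depth/v
    return ricker(t - 2.0 * depth / v)

d_obs = seismogram(2000.0)          # "observed" data; true velocity 2000 m/s

# Scan candidate layer velocities and keep the one whose synthetic
# seismogram best fits the observed one in a least-squares sense.
candidates = np.arange(1500.0, 2501.0, 10.0)
misfits = [np.sum((seismogram(v) - d_obs) ** 2) for v in candidates]
v_best = candidates[int(np.argmin(misfits))]
print(v_best)                       # 2000.0
```

Because the scan evaluates the waveform misfit globally over the candidate range, it avoids the local-minimum trap that a purely gradient-based update can fall into when the starting model is poor.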
Abstract:
After a brief introduction to seismic exploration and its general state of development, the field implementation of the seismic exploration method and some problems frequently encountered in the field, which require careful attention, are analyzed in detail. The most economical field work techniques are emphasized. The seismic data processing flow and the technique for interpreting the processing results are then presented. Finally, four examples of seismic prospecting in gold deposits are shown. The main conclusions of our research are: 1. Seismic prospecting is a very efficient technique for the prediction of concealed gold deposits. Appropriately applied, it can reliably image detailed underground geological structure even in rugged terrain and complicated geological environments. 2. The field geometry should be designed and adjusted according to the exploration depth of the target and the ground conditions. The best field parameters, which include the offset, the distance between adjacent traces, the quantity of dynamite and the depth of the shot hole, should be determined by testing; only in this way can high-quality original seismic data be obtained. 3. In seismic data processing, the editing of invalid traces and source gathers, signal enhancement, velocity analysis and migration are the key steps. This differs in several respects from conventional processing and requires a new processing flow and methods suited to data acquired in rugged terrain and complicated geological environments. 4. The new common-reflection-area stacking method for crooked-line data processing is an efficient way to improve the signal-to-noise ratio of seismic data. The innovations of our research work are: 1. In areas that were considered forbidden zones, we carried out seismic exploration at several gold deposits in China, all with distinguished results. This shows that seismic exploration is an effective new method for the prediction of concealed gold deposits. 2. We developed a set of seismic field work and data processing techniques suited to complex environments, and in particular found an effective method of stacking and noise elimination for crooked-line data processing. 3. In seismic profile interpretation, our research convinced us that the seismic reflection character differs between geological settings; for example, lava, intrusive rock and sediment layers differ in the structure and strength of their reflections. We have thus accumulated some experience in seismic data interpretation in gold deposit areas.
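The benefit of stacking traces that share a reflection area can be sketched numerically. This is a generic illustration of stacking, not the paper's crooked-line method; the fold, the signal shape and the noise level are assumptions. With independent noise, stacking n traces improves the signal-to-noise ratio by roughly sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0.0, 4.0 * np.pi, 500))   # idealised reflection signal

n = 24                                 # fold: traces sharing a reflection area
traces = signal + rng.normal(0.0, 1.0, size=(n, 500)) # each trace = signal + noise
stack = traces.mean(axis=0)            # common-area stack

def snr(x):
    """SNR estimate: signal power relative to residual noise power."""
    return np.std(signal) / np.std(x - signal)

# Stacked SNR is roughly sqrt(n) times the single-trace SNR.
print(snr(traces[0]), snr(stack))
```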
Abstract:
The dream of pervasive computing is slowly becoming a reality. A number of projects around the world are constantly contributing ideas and solutions that are bound to change the way we interact with our environments and with one another. An essential component of the future is a software infrastructure capable of supporting interactions on scales ranging from a single physical space to intercontinental collaborations. Such an infrastructure must help applications adapt to very diverse environments, and it must protect people's privacy and respect their personal preferences. In this paper we indicate a number of limitations present in the software infrastructures proposed so far (including our previous work). We then describe a framework for building an infrastructure that satisfies the above-mentioned criteria. This framework hinges on the concepts of delegation, arbitration and high-level service discovery. Components of our own implementation of such an infrastructure are presented.
Abstract:
An investigation of innovation management and entrepreneurial management is conducted in this thesis. The aim of the research is to explore changes of innovation style in the transformation from a start-up company to a more mature phase of business, and in a second step to predict future sustainability and the probability of success. As businesses grow in revenue, corporate size and functional complexity, various triggers, supporters and drivers affect innovation and a company's success. In a comprehensive study, more than 200 innovative and technology-driven companies have been examined and compared to identify patterns at different performance levels. All of them were founded under the same formal requirements of the Munich Business Plan Competition, a research approach which allowed a unique snapshot that otherwise only long-term studies would be able to provide. The general objective was to identify the correlation between different factors, as well as different dimensions, and the incremental and radical innovations realised. The 12 hypotheses to be tested have been derived from a comprehensive literature review. The relevant academic and practitioner literature on entrepreneurial, innovation and knowledge management, as well as social network theory, revealed that the concept of innovation has evolved significantly over the last decade. A review of over 15 innovation models/frameworks contributed to understanding what innovation means in context and what its dimensions are. It appears that the complex theories of innovation can be described by the increasing extent of social ingredients in the explanation of innovativeness. Originally based on tangible forms of capital, and on the necessity of market pull and technology push, innovation management is today integrated in a larger system. Therefore, two research instruments have been developed to explore the changes in innovation styles.
The Innovation Management Audits (IMA Start-up and IMA Mature) provided statements related to product/service development, innovativeness in various typologies, resources for innovation, innovation capabilities in conjunction with knowledge and management, and social networks, as well as the measurement of outcomes, to generate high-quality data for further exploration. For the analysis, the mature companies have been clustered into the performance levels low, average and high, while the start-up companies have been kept as one cluster. Firstly, the analysis exposed that knowledge, the process of acquiring knowledge, interorganisational networks and resources for innovation are the most important driving factors for innovation and success. Secondly, the actual change of innovation style provides new insights into the importance of focusing on sustaining success and innovation in 16 key areas. Thirdly, a detailed overview of triggers, supporters and drivers for innovation and success in each dimension supports decision makers in steering their company in the right direction. Fourthly, a critical review of contemporary strategic management in conjunction with the findings provides recommendations on how to apply well-known management tools. Last but not least, the Munich cluster is analysed, providing an estimation of the success probability of the different performance clusters and the start-up companies. For the analysis of the probability of success, the newly developed and both statistically and qualitatively validated ICP Model (Innovativeness, Capabilities & Potential) has been applied. While the model was primarily developed to evaluate the probability of success of companies, it is equally applicable to measuring innovativeness in order to identify the impact of various strategic initiatives within small or large enterprises.
The main findings of the model are that competitor and customer orientation and the acquisition of knowledge are important for incremental and radical innovation. Formal and interorganisational networks are important for fostering innovation, but informal networks appear to be detrimental to it. Testing the ICP model over the long term is recommended as one subject of further research; another is to investigate some of the more intangible aspects of innovation management, such as the attitude and motivation of managers.
Abstract:
It is anticipated that constrained devices in the Internet of Things (IoT) will often operate in groups to achieve collective monitoring or management tasks. For sensitive and mission-critical sensing tasks, securing multicast applications is therefore highly desirable. To secure group communications, several group key management protocols have been introduced. However, the majority of the proposed solutions are not adapted to the IoT and its strong processing, storage, and energy constraints. In this context, we introduce a novel decentralized and batch-based group key management protocol to secure multicast communications. Our protocol is simple; it reduces the rekeying overhead triggered by membership changes in dynamic and mobile groups, and it guarantees both backward and forward secrecy. To assess our protocol, we conduct a detailed analysis of its communication and storage costs. This analysis is validated through simulation to highlight the energy gains. The obtained results show that our protocol outperforms its peers with respect to keying overhead and the mobility of members.
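For context on why rekeying overhead dominates the cost of group key management, a classic point of comparison (not the protocol proposed in this paper) is the logical key hierarchy: in a balanced binary key tree, evicting one member only requires re-encrypting the keys on its leaf-to-root path, so the rekeying cost grows as O(log2 n) instead of the O(n) cost of redistributing a flat group key to every remaining member. A minimal sketch of the message counts:

```python
import math

def flat_rekey_messages(n):
    # flat scheme: a fresh group key must be sent to every remaining member
    return n - 1

def tree_rekey_messages(n):
    # key-tree scheme: update each key on the evicted leaf's path to the root;
    # each updated key is re-encrypted for the two subtrees beneath it
    depth = math.ceil(math.log2(n))
    return 2 * depth

for n in (8, 1024, 1_000_000):
    # e.g. n=1024: 1023 flat messages vs 20 tree messages
    print(n, flat_rekey_messages(n), tree_rekey_messages(n))
```

Batching membership changes, as the paper's protocol does, amortizes even this logarithmic per-change cost across several joins and leaves.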
Abstract:
Chris L. Organ, Andrew M. Shedlock, Andrew Meade, Mark Pagel and Scott V. Edwards (2007). Origin of avian genome size and structure in non-avian dinosaurs. Nature, 446(7132), 180-184. RAE2008
Abstract:
Along with the growing demand for cryptosystems in systems ranging from large servers to mobile devices, cryptographic protocols suitable for use under particular constraints are becoming more and more important. Constraints such as calculation time, area, efficiency and security must be considered by the designer. Elliptic curves, since their introduction to public key cryptography in 1985, have challenged established public key and signature generation schemes such as RSA, offering more security per bit. Among elliptic-curve-based systems, pairing-based cryptography is thoroughly researched and can be used in many public key protocols, such as identity-based schemes. For hardware implementations of pairing-based protocols, all components that calculate operations over elliptic curves must be considered. Designers of pairing algorithms must choose the calculation blocks and arrange the basic operations carefully so that the implementation can meet the constraints of time and hardware resource area. This thesis deals with different hardware architectures for accelerating pairing-based cryptosystems over fields of characteristic two. Using different top-level architectures, the hardware efficiency of operations that run at different times is first considered. Security is another important aspect of pairing-based cryptography, in practice particularly with respect to Side Channel Analysis (SCA) attacks. Naively implemented hardware accelerators for pairing-based cryptography can be vulnerable when physical analysis attacks are taken into consideration. This thesis considers the weaknesses of pairing-based public key cryptography and identifies the particular calculations in such systems that are insecure. Countermeasures should then be applied to protect the weak links of the implementation and so improve the pairing-based algorithms.
Some important rules that designers must obey to improve the security of these cryptosystems are proposed. Following these rules, three countermeasures that protect pairing-based cryptosystems against SCA attacks are applied. The implementations of the countermeasures are presented and their performance is investigated.
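One standard SCA countermeasure of the kind such theses build on (shown generically here, not as one of the three countermeasures this thesis applies) is exponent blinding: the secret exponent is randomized on every run by adding a random multiple of the group order, which leaves the computed value unchanged but decorrelates the executed operation sequence from the secret. For brevity the sketch uses the multiplicative group modulo a prime; on an elliptic curve the same identity holds with the curve's group order:

```python
import secrets

p = 0xFFFFFFFFFFFFFFC5            # 2**64 - 59, a prime; Z_p* has order p - 1
g = 3                             # public base
k = 0x1234567890ABCDEF            # the secret exponent to protect

def blinded_pow(g, k, p):
    r = secrets.randbelow(1 << 32)       # fresh blinding factor on every call
    # g^(k + r*(p-1)) == g^k mod p, by Fermat's little theorem, so the result
    # is unchanged while the exponent actually processed differs each run.
    return pow(g, k + r * (p - 1), p)

print(blinded_pow(g, k, p) == pow(g, k, p))   # True on every run
```

Because the processed exponent changes on every invocation, power or timing traces from repeated executions can no longer be averaged to recover the fixed secret.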
Abstract:
The aim of this study is to garner comparative insights so as to aid the development of the discourse on further education (FE) conceptualisation and the relationship of FE with educational disadvantage and employability. This aim is particularly relevant in Irish education parlance amidst the historical ambiguity surrounding the functioning of FE. The study sets out to critically engage with the education/employability/economy link (eee link). This involves a critique of issues relevant to participation (which extends beyond student activity alone to social relations generally and the dialogic participation of the disadvantaged), accountability (which extends beyond performance measures alone to encompass equality of condition towards a socially just end) and human capital (which extends to both collective and individual aspects within an educational culture). As a comparative study, there is a strong focus on providing a way of conceptualising and comparatively analysing FE policy internationally. The study strikes a balance between conceptual and practical concerns. A critical comparative policy analysis is the methodology that structures the study; it is informed and progressed by a genealogical method to establish the context of each of the jurisdictions of England, the United States and the European Union. Genealogy allows the use of history to diagnose the present rather than to explain how the past has caused the present. The discussion accentuates the power struggles within education policy practice using what Fairclough calls a strategic critique as well as an ideological critique. The comparative nature of the study means that there is a need to be cognizant of the diverse cultural influences on policy deliberation. The study uses the theoretical concept of paradigmatic change to critically analyse the jurisdictions.
To aid the critical analysis, a conceptual framework for legislative functions is developed so as to provide a metalanguage for educational legislation. The specific contribution of the study, besides providing a means of understanding and progressing FE policy development in a globalized Ireland, is to clear the ground for a better-defined and critically reflexive FE sector, and it suggests a number of issues for further deliberation.