Abstract:
BACKGROUND: Microvascular free tissue transfer has become increasingly popular in the reconstruction of head and neck defects, but it also has its disadvantages. Tissue engineering allows the generation of neo-tissue for implantation, but these tissues are often avascular. We propose to combine tissue-engineering techniques with flap prefabrication techniques to generate a prefabricated vascularized soft tissue flap. METHODS: Human dermal fibroblasts (HDFs) labeled with fluorescein diacetate were static seeded onto polylactic-co-glycolic acid-collagen (PLGA-c) mesh. Controls were plain PLGA-c mesh. The femoral artery and vein of the nude rat were ligated and used as a vascular carrier for the constructs. After 4 weeks of implantation, the constructs were assessed by gross morphology, routine histology, and Masson trichrome staining, with cell viability determined by green fluorescence. RESULTS: All the constructs maintained their initial shape and dimensions. Angiogenesis was evident in all the constructs, with neo-capillary formation seen within the PLGA-c mesh. HDFs proliferated and filled the interyarn spaces of the PLGA-c mesh, while unseeded PLGA-c mesh remained relatively acellular. The cell tracer study indicated that the seeded HDFs remained viable and closely associated with the remaining PLGA-c fibers. Collagen formation was more abundant in the constructs seeded with HDFs. CONCLUSIONS: PLGA-c, enveloped by a cell sheet composed of fibroblasts, can serve as a suitable scaffold for generation of a soft tissue flap. A ligated arteriovenous pedicle can serve as a vascular carrier for the generation of a tissue-engineered vascularized flap.
Abstract:
Embedded generalized markup, as applied by digital humanists to the recording and studying of our textual cultural heritage, suffers from a number of serious technical drawbacks. As a result of its evolution from early printer control languages, generalized markup can only express a document’s ‘logical’ structure via a repertoire of permissible printed format structures. In addition to the well-researched overlap problem, the embedding of markup codes into texts that never had them when written leads to a number of further difficulties: the inclusion of potentially obsolescent technical and subjective information into texts that are supposed to be archivable for the long term, the manual encoding of information that could be better computed automatically, and the obscuring of the text by highly complex technical data. Many of these problems can be alleviated by asserting a separation between the versions of which many cultural heritage texts are composed, and their content. In this way the complex inter-connections between versions can be handled automatically, leaving only simple markup for individual versions to be handled by the user.
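The overlap problem mentioned above can be made concrete with a minimal stand-off annotation sketch (an illustration only, with invented character offsets, not a description of any particular system): when annotations are kept outside the text as ranges, two structures that partially overlap, such as a page break falling mid-sentence, can coexist, whereas embedded XML-style markup cannot express them as properly nested elements.

```python
# Hypothetical annotations as (type, start, end) character ranges stored
# separately from the text they describe (stand-off markup).
annotations = [
    ("sentence-1", 0, 30),   # a sentence
    ("page-1", 0, 15),       # page 1 ends mid-sentence
    ("page-2", 15, 58),      # page 2 begins mid-sentence
]

def partially_overlaps(a, b):
    """True when two ranges overlap but neither contains the other --
    exactly the configuration embedded markup cannot nest."""
    _, a_start, a_end = a
    _, b_start, b_end = b
    return a_start < b_start < a_end < b_end

# The sentence and page-2 ranges coexist here without any workaround:
print(partially_overlaps(annotations[0], annotations[2]))  # -> True
```

In embedded markup the same situation forces workarounds such as empty milestone elements; as stand-off ranges the two hierarchies simply never interact.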
Abstract:
The influence of biogenic particle formation on climate is a well-recognised phenomenon. To understand the mechanisms underlying biogenic particle formation, determining the chemical composition of the new particles, and therefore the species that drive particle production, is of utmost importance. Due to the very small amount of mass involved, indirect approaches are frequently used to infer the composition. We present here the results of such an indirect approach, based on simultaneously measuring the volatile and hygroscopic properties of newly formed particles in a forest environment. It is shown that the particles are composed of both sulphates and organics, with the amount of the sulphate component depending strongly on the available gas-phase sulphuric acid, and the organic components having the same volatility and hygroscopicity as the photooxidation products of a monoterpene such as α-pinene. Our findings agree with a two-step process: nucleation and cluster formation, followed by simultaneous growth through condensation of sulphates and organics that takes the particles to climatically relevant sizes.
Abstract:
Recent studies have detected a dominant accumulation mode (~100 nm) in the Sea Spray Aerosol (SSA) number distribution. There is evidence to suggest that particles in this mode are composed primarily of organics. To investigate this hypothesis we conducted experiments on NaCl, artificial SSA and natural SSA particles with a Volatility-Hygroscopicity-Tandem-Differential-Mobility-Analyser (VH-TDMA). NaCl particles were atomiser-generated, and a bubble generator was constructed to produce artificial and natural SSA particles. Natural seawater samples for use in the bubble generator were collected from biologically active, terrestrially-affected coastal water in Moreton Bay, Australia. Differences in the VH-TDMA-measured volatility curves of artificial and natural SSA particles were used to investigate and quantify the organic fraction of natural SSA particles. Hygroscopic Growth Factor (HGF) data, also obtained by the VH-TDMA, were used to confirm the conclusions drawn from the volatility data. Both datasets indicated that the organic fraction of our natural SSA particles evaporated in the VH-TDMA over the temperature range 170–200°C. The organic volume fraction for 71–77 nm natural SSA particles was 8±6%. The organic volume fraction did not vary significantly with water residence time (40 seconds to 24 hours) in the bubble generator, or with SSA particle diameter in the range 38–173 nm. At room temperature we measured shape- and Kelvin-corrected HGFs at 90% RH of 2.46±0.02 for NaCl, 2.35±0.02 for artificial SSA and 2.26±0.02 for natural SSA particles. Overall, these results suggest that the natural accumulation mode SSA particles produced in these experiments contained only a minor organic fraction, which had little effect on hygroscopic growth. Our measurement of 8±6% is an order of magnitude below two previous measurements of the organic fraction in SSA particles of comparable sizes.
We stress that our results were obtained using coastal seawater and cannot necessarily be applied at regional or global ocean scales. Nevertheless, considering the order-of-magnitude discrepancy between this and previous studies, further research with independent measurement techniques and a variety of different seawaters is required to better quantify how much organic material is present in accumulation mode SSA.
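The logic behind inferring an organic volume fraction from volatility data can be sketched as follows (a simplified illustration with hypothetical diameters, not measurements from this study): if the inorganic sea-salt cores of artificial and natural SSA shrink identically on heating, the extra volume lost by the natural particles estimates their organic volume fraction.

```python
def volume_fraction_remaining(d_heated_nm, d_initial_nm):
    """Volume fraction remaining after heating, assuming spherical
    (shape-corrected) particles, from mobility diameters."""
    return (d_heated_nm / d_initial_nm) ** 3

# Hypothetical 75 nm particles heated to 200 C in a VH-TDMA:
vfr_artificial = volume_fraction_remaining(74.0, 75.0)  # salt-only reference
vfr_natural = volume_fraction_remaining(72.0, 75.0)     # salt + organics

# The extra volume lost by the natural particles estimates the organic fraction.
organic_fraction = vfr_artificial - vfr_natural
print(f"{organic_fraction:.1%}")  # -> 7.6%
```

With these invented diameters the estimate lands near the ~8% order of magnitude reported above; real retrievals must also account for shape and Kelvin corrections.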
Abstract:
In plant cells, myosin is believed to be the molecular motor responsible for actin-based motility processes such as cytoplasmic streaming and directed vesicle transport. In an effort to characterize plant myosin, a cDNA encoding a myosin heavy chain was isolated from Arabidopsis thaliana. The predicted product of the MYA1 gene is 173 kDa and is structurally similar to the class V myosins. It is composed of the highly conserved NH2-terminal "head" domain, a putative calmodulin-binding "neck" domain, an alpha-helical coiled-coil domain, and a COOH-terminal domain. Northern blot analysis shows that the Arabidopsis MYA1 gene is expressed in all the major plant tissues (flower, leaf, root, and stem). We suggest that the MYA1 myosin may be involved in a general intracellular transport process in plant cells.
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows the users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: What is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next generation of the Web, aims at making the content of all types of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industries are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, are still worthwhile investigations. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in the four phases below. 1) Salient object extraction.
A salient object serves as the basic unit in image semantic extraction as it captures the common visual property of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms often fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept in the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase determines the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image.
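The scene-inference step described above can be illustrated with a toy probabilistic model (the scene types, priors, and co-occurrence probabilities below are invented for illustration; the thesis uses a full probabilistic graph model rather than this naive-Bayes sketch):

```python
# Hypothetical scene configuration: scene priors and object-given-scene
# co-occurrence probabilities learned from annotated images.
p_scene = {"beach": 0.5, "street": 0.5}
p_obj_given_scene = {
    "beach":  {"sky": 0.9, "sand": 0.8, "car": 0.05},
    "street": {"sky": 0.7, "sand": 0.05, "car": 0.8},
}

def scene_posterior(object_labels, p_scene, p_obj_given_scene):
    """Posterior over scene types given annotated objects (naive Bayes)."""
    scores = {}
    for scene, prior in p_scene.items():
        score = prior
        for obj in object_labels:
            # Unseen objects get a small smoothing probability.
            score *= p_obj_given_scene[scene].get(obj, 1e-3)
        scores[scene] = score
    total = sum(scores.values())
    return {scene: s / total for scene, s in scores.items()}

posterior = scene_posterior(["sky", "sand"], p_scene, p_obj_given_scene)
# "sky" and "sand" together point strongly to a beach scene.
```

The same idea, generalised to a graph of dependencies among objects and scene type, is what the probabilistic inference in phase 4 computes.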
To evaluate the proposed methods, a series of experiments has been conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
Abstract:
Bean golden mosaic geminivirus (BGMV) has a bipartite genome composed of two circular ssDNA components (DNA-A and DNA-B) and is transmitted by the whitefly, Bemisia tabaci. DNA-A encodes the viral replication proteins and the coat protein. To determine the role of the BGMV coat protein in systemic infection and whitefly transmission, two deletions and a restriction fragment inversion were introduced into the BGMV coat protein gene. All three coat protein mutants produced systemic infections when coinoculated with DNA-B onto Phaseolus vulgaris using electric discharge particle acceleration (a "particle gun"). However, they were not sap transmissible, and coat protein was not detected in mutant-infected plants. In addition, none of the mutants were transmitted by whiteflies. With all three mutants, ssDNA accumulation of DNA-A and DNA-B was reduced 25- to 50-fold and 3- to 10-fold, respectively, as compared to that of wild-type DNA. No effect on dsDNA-A accumulation was detected, and there was a 2- to 5-fold increase in dsDNA-B accumulation. Recombinants between the mutated DNA-A and DNA-B forms were identified when the inoculated coat protein mutant was linearized in the common region.
Abstract:
This thesis investigates the coefficient of performance (COP) of a hybrid liquid desiccant solar cooling system. This hybrid cooling system comprises three sections: 1) a conventional air-conditioning section; 2) a liquid desiccant dehumidification section; and 3) an air mixture section. The air handling unit (AHU) with a mixture variable air volume design is included in the hybrid cooling system to control humidity. In the combined system, the air is first dehumidified in the dehumidifier and then mixed with ambient air by the AHU before entering the evaporator. Experiments using lithium chloride as the liquid desiccant have been carried out for the performance evaluation of the dehumidifier and regenerator. Based on the air mixture (AHU) design, models of the electrical coefficient of performance (ECOP), thermal coefficient of performance (TCOP) and whole-system coefficient of performance (COPsys) of the hybrid liquid desiccant solar cooling system were developed to evaluate its performance. These mathematical models can be used to describe the coefficient of performance trend under different ambient conditions, while also providing a convenient comparison with conventional air conditioning systems. They provide good explanations of the relationship between the models' performance predictions and ambient air parameters. The simulation results have revealed that the coefficient of performance of hybrid liquid desiccant solar cooling systems depends substantially on ambient air and dehumidifier parameters. The liquid desiccant experiments also prove that the latent component of the total cooling load requirements can easily be fulfilled by using the liquid desiccant dehumidifier. While cooling requirements can be met, the liquid desiccant system is, however, still subject to hysteresis problems.
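One plausible formulation of the three performance figures named above is cooling output divided by the relevant energy input (the exact model definitions in the thesis may differ, and the operating-point numbers below are hypothetical):

```python
def ecop(q_cooling_kw, w_electric_kw):
    """Electrical COP: cooling delivered per unit of electrical input."""
    return q_cooling_kw / w_electric_kw

def tcop(q_cooling_kw, q_regen_thermal_kw):
    """Thermal COP: cooling delivered per unit of (solar) regeneration heat."""
    return q_cooling_kw / q_regen_thermal_kw

def cop_sys(q_cooling_kw, w_electric_kw, q_regen_thermal_kw):
    """Whole-system COP: cooling delivered per unit of total energy input."""
    return q_cooling_kw / (w_electric_kw + q_regen_thermal_kw)

# Hypothetical operating point: 10 kW cooling, 2.5 kW electricity, 8 kW solar heat.
print(ecop(10, 2.5), tcop(10, 8), round(cop_sys(10, 2.5, 8), 2))  # -> 4.0 1.25 0.95
```

The split makes the comparison with conventional air conditioning explicit: a vapour-compression unit is judged on the electrical figure alone, while the hybrid system's solar heat input only counts against TCOP and COPsys.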
Abstract:
Mobile sensor platforms such as Autonomous Underwater Vehicles (AUVs) and robotic surface vessels, combined with static moored sensors, compose a diverse sensor network that can provide a macroscopic environmental analysis tool for ocean researchers. Working as a cohesive networked unit, the static buoys are always online and provide insight into the times and locations at which a federated, mobile robot team should be deployed to effectively perform large-scale spatiotemporal sampling on demand. Such a system can provide pertinent in situ measurements to marine biologists, who can then advise policy makers on critical environmental issues. This poster presents recent field deployment activity of AUVs demonstrating the effectiveness of our embedded communication network infrastructure throughout southern California coastal waters. We also report on progress towards real-time, web-streaming data from the multiple sampling locations and mobile sensor platforms. The static monitoring sites included in this presentation detail the network nodes positioned at Redondo Beach and Marina Del Rey. Among the deployed mobile sensors highlighted here are autonomous Slocum gliders. These nodes operate in the open ocean for periods as long as one month. The gliders are connected to the network via a Freewave radio modem network composed of multiple coastal base-stations. This increases the efficiency of deployment missions by reducing operational expenses via reduced reliance on satellite phones for communication, as well as increasing the rate and amount of data that can be transferred. Another mobile sensor platform presented in this study is the autonomous robotic boat. These platforms are utilized for harbor and littoral zone studies, and are capable of performing multi-robot coordination while observing known communication constraints.
All of these pieces fit together to present an overview of ongoing collaborative work to develop an autonomous, region-wide, coastal environmental observation and monitoring sensor network.
Abstract:
To date, the majority of films that utilise or feature hip hop music and culture have either been in the realm of documentary, or in ‘show musicals’ (where the film musical’s device of characters bursting into song is justified by the narrative of a pursuit of a career in the entertainment industry). Thus, most films that feature hip hop expression have in some way been tied to the subject of hip hop. A research interest and enthusiasm was developed for utilising hip hop expression in film in a new way, which would extend the narrative possibilities of hip hop film to wider topics and themes. The creation of the thesis film Out of My Cloud, and the writing of this accompanying exegesis, investigate the potential for the use of hip hop expression in an ‘integrated musical’ film (where characters break into song without conceit or explanation). Context and rationale for Out of My Cloud (an Australian hip hop ‘integrated musical’ film) are provided in this writing. It is argued that hip hop is particularly suitable for use in a modern narrative film, and particularly in an ‘integrated musical’ film, due to its current vibrancy and popularity, rap (the vocal element of hip hop) music’s focus on lyrical message and meaning, and rap’s use as an everyday, non-performative method of communication. It is also argued that Australian hip hop deserves greater representation in film and literature due to its current popularity and its nature as a unique and distinct form of hip hop. To date, representation of Australian hip hop in film and television has almost solely been restricted to the documentary form.
Out of My Cloud borrows from elements of social realist cinema such as contrasts with mainstream cinema, an exploration/recognition of the relationship between environment and development of character, use of non-actors, location shooting, a political intent of the filmmaker, displaying sympathy for an underclass, representation of underrepresented character types and topics, and a loose narrative structure that does not offer solid resolution. A case is made that it may be appropriate to marry elements of social realist film with hip hop expression due to common characteristics such as representation of marginalised or underrepresented groups and issues in society, political objectives of the artist/s, and sympathy for an underclass. In developing and producing Out of My Cloud, a specific method of working with, and filming, actor improvisation was developed. This method was informed by the improvisation and associated camera techniques of filmmakers such as Charlie Chaplin, Mike Leigh, Khoa Do, Dogme 95 filmmakers, and Lars von Trier (post-Dogme 95). A review of the techniques used by these filmmakers is provided in this writing, as well as the impact they have made on my approach. The method utilised in Out of My Cloud was most influenced by Khoa Do’s technique of guiding actors to improvise fairly loosely, but with a predetermined endpoint in mind. A variation of this technique was developed for use in Out of My Cloud, which involved filming with two cameras to allow edits from multiple angles. Specific processes for creating Out of My Cloud are described and explained in this writing. Particular attention is given to the approaches regarding the story elements and the music elements.
Various significant aspects of the process are referred to, including the filming and recording of live musical performances, the recording of ‘freestyle’ performances (lyrics composed and performed spontaneously), and the creation of a scored musical scene involving a vocal performance without regular timing or rhythm. The documentation of processes in this writing serves to make the successful elements of this film transferable and replicable for other practitioners in the field, whilst flagging missteps so that fellow practitioners can avoid them in future projects. While Out of My Cloud is not without its shortcomings as a short film work (for example in the areas of story and camerawork), it provides a significant contribution to the field as a working example of how hip hop may be utilised in an ‘integrated musical’ film, as well as being a rare example of a narrative film that features Australian hip hop. This film and the accompanying exegesis provide insights that contribute to an understanding of techniques, theories and knowledge in the field of filmmaking practice.
Abstract:
Optimal scheduling of voltage regulators (VRs), fixed and switched capacitors, and voltage on the customer side of the transformer (VCT), along with the optimal allocation of VRs and capacitors, is performed using a hybrid optimisation method based on discrete particle swarm optimisation and a genetic algorithm. Direct optimisation of the tap position is not appropriate since, in general, the high voltage (HV) side voltage is not known. Therefore, the tap setting can be determined given the optimal VCT once the HV side voltage is known. The objective function is composed of the distribution line loss cost, the peak power loss cost, and the capacitors' and VRs' capital, operation and maintenance costs. The constraints are limits on bus voltage and feeder current, along with VR taps. The bus voltage should be maintained within the standard level, and the feeder current should not exceed the feeder-rated current. The taps adjust the output voltage of VRs to between 90 and 110% of their input voltages. For validation of the proposed method, the 18-bus IEEE system is used. The results are compared with prior publications to illustrate the benefit of the employed technique. The results also show that the lowest-cost planning for the voltage profile is achieved if a combination of capacitors, VRs and VCTs is considered.
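The structure of the objective function and constraints described above can be sketched as follows (the prices and limits are hypothetical placeholders, not values from the paper; a real implementation would evaluate these inside the particle swarm / genetic algorithm loop):

```python
def total_cost(loss_energy_mwh, peak_loss_mw, capital_cost, om_cost,
               energy_price=50.0, peak_price=120.0):
    """Objective: line-loss energy cost + peak-power-loss cost
    + capital + operation-and-maintenance cost of capacitors and VRs."""
    return (loss_energy_mwh * energy_price
            + peak_loss_mw * peak_price
            + capital_cost + om_cost)

def feasible(bus_voltages_pu, feeder_current_a, rated_current_a,
             vr_tap_ratio, v_min=0.9, v_max=1.1):
    """Constraints: bus voltages within the standard band, feeder current
    under its rating, and the VR tap within 90-110% of its input voltage."""
    return (all(v_min <= v <= v_max for v in bus_voltages_pu)
            and feeder_current_a <= rated_current_a
            and 0.9 <= vr_tap_ratio <= 1.1)

print(total_cost(100.0, 2.0, 1000.0, 50.0))        # -> 6290.0
print(feasible([0.95, 1.02], 400.0, 500.0, 1.05))  # -> True
```

Each candidate schedule proposed by the hybrid optimiser would be scored with `total_cost` and rejected (or penalised) when `feasible` returns False.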
Abstract:
An ethylenediamine-assisted route has been designed for the one-step synthesis of lithium niobate particles with a novel rodlike structure in an aqueous solution system. The morphological evolution of these lithium niobate rods was monitored via SEM: the raw materials first form large lozenges. These lozenges are a metastable intermediate of this reaction, and they subsequently crack into small rods after a sufficiently long time. These small rods recrystallize and finally grow into individual lithium niobate rods. Interestingly, shape-controlled fabrication of lithium niobate powders was achieved by using different amine ligands. For instance, the ethylenediamine or ethanolamine ligand can induce the formation of rods, while n-butylamine prefers to construct hollow spheres. These as-obtained lithium niobate rods and hollow spheres may exhibit enhanced performance in optical applications due to their distinctive structures. This effective ligand-tuned-morphology route can provide a new strategy for facilely achieving the shape-controlled synthesis of other niobates.
Abstract:
Cu2ZnSnS4 (CZTS) is considered to be one of the most promising light-absorbing materials for low-cost, high-efficiency thin film solar cells. Compared to conventional CuIn(S,Se)2 (CIS) and Cu(InGa)(S,Se)2 (CIGS) as well as CdTe light absorbers, CZTS is composed only of earth-abundant, non-toxic elements, ensuring the price competitiveness of this kind of solar cell in the future PV market. However, research in this area is very limited compared to CIS and CIGS. Detailed studies of both the material and the device are rare, which significantly restricts development in this area. This paper reviews progress in the research field of CZTS, particularly the methods which have been employed to prepare CZTS absorber material.
Abstract:
This review collects and summarises the biological applications of the element cobalt. Small amounts of the ferromagnetic metal can be found in rock, soil, plants and animals, but it is mainly obtained as a by-product of nickel and copper mining, and is separated from the ores (mainly cobaltite, erythrite, glaucodot and skutterudite) using a variety of methods. Cobalt compounds include several oxides: green cobalt(II) oxide (CoO), blue cobalt(II,III) oxide (Co3O4), and black cobalt(III) oxide (Co2O3); and four halides: pink cobalt(II) fluoride (CoF2), blue cobalt(II) chloride (CoCl2), green cobalt(II) bromide (CoBr2), and blue-black cobalt(II) iodide (CoI2). The main application of cobalt is in its metal form in cobalt-based superalloys, though other uses include lithium cobalt oxide batteries, chemical reaction catalysts, pigments and colouring, and radioisotopes in medicine. When applied chemically as cobalt chloride (CoCl2), cobalt is known to mimic hypoxia at the cellular level by stabilizing the α subunit of hypoxia-inducing factor (HIF). This is seen in many biological research applications, where it has been shown to promote angiogenesis, erythropoiesis and anaerobic metabolism through the transcriptional activation of genes such as vascular endothelial growth factor (VEGF) and erythropoietin (EPO), contributing significantly to the pathophysiology of major categories of disease, such as myocardial, renal and cerebral ischaemia, high-altitude-related maladies and bone defects. As a necessary constituent in the formation of vitamin B12, cobalt is essential to all animals, including humans; however, excessive exposure can lead to tissue and cellular toxicity. Cobalt has shown promising potential in clinical applications, but further studies are necessary to clarify its role in hypoxia-responsive genes and the applications of cobalt-chloride-treated tissues.
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require the data to be accurate, timely and lossless. However, because of random transmission delays and packet losses, the control performance of a control system may deteriorate badly, and the control system may be rendered unstable. The main challenge of NCS design is to both maintain and improve the stable control performance of an NCS. To achieve this, communication and control methodologies have to be designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy the communication requirements of NCSs, such as real-time communication and high-precision clock consistency requirements. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design.
To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft-real-time control applications are modelled by means of a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. By using a Markov chain model, we can accurately model the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
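The predictive compensation idea can be sketched in miniature (a simplified illustration of the general networked-predictive-control pattern, not the thesis's actual algorithm): the controller sends a short horizon of predicted control values in every packet, and the actuator selects the entry matching the measured delay, falling back to the previous horizon when a packet is lost.

```python
def apply_control(packet_horizon, delay_steps, last_horizon, steps_since_rx):
    """Pick the control input the actuator should apply this sample period.

    packet_horizon: predicted controls [u(k), u(k+1), ...] from the newest
                    packet, or None if the packet was lost.
    delay_steps:    measured network delay of that packet, in sample periods
                    (requires synchronised clocks, ignored when packet lost).
    Returns (control_to_apply, horizon_now_in_use, updated_steps_since_rx).
    """
    if packet_horizon is None:  # packet lost: keep consuming old predictions
        idx = min(steps_since_rx, len(last_horizon) - 1)
        return last_horizon[idx], last_horizon, steps_since_rx + 1
    idx = min(delay_steps, len(packet_horizon) - 1)  # skip entries made stale by delay
    return packet_horizon[idx], packet_horizon, idx + 1

# Fresh packet delayed by 2 samples: apply its third predicted value.
u, horizon, k = apply_control([1.0, 1.1, 1.2, 1.3], 2, [0.9, 1.0, 1.1], 0)
# Next packet lost: fall back to the horizon already on hand.
u_lost, _, _ = apply_control(None, None, horizon, k)
```

This also shows why the clock synchronisation protocol matters: the actuator can only index into the horizon correctly if it can measure the end-to-end delay of each packet.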