Abstract:
Purpose – The purpose of this paper is to develop a conceptual framework that can be used to identify capabilities needed in the management of infrastructure assets. Design/methodology/approach – This paper utilises a qualitative approach to analyse secondary data in order to develop a conceptual framework that identifies capabilities for strategic infrastructure asset management. Findings – In an external business environment that is undergoing rapid change, it is more appropriate to focus on factors internal to the organisation, such as resources and capabilities, as a basis for developing competitive advantage. However, there is currently very little understanding of the internal capabilities that are appropriate for infrastructure asset management. A conceptual framework is therefore needed to guide infrastructure organisations in the identification of capabilities. Research limitations/implications – This is a conceptual paper, and future empirical research should be conducted to validate the propositions made in the paper. Practical implications – The paper clearly argues the need for infrastructure organisations to adopt a systematic approach to identifying the capabilities needed in the management of strategic infrastructure assets. The discussion of the impact of essential capabilities provides an impetus for managers operating in a deregulated infrastructure business landscape to review their existing strategies. Originality/value – The paper provides a new perspective on how asset managers can create value for their organisations by investing in the relevant capabilities.
Abstract:
As Web searching becomes more prevalent for information access worldwide, we need to better understand users’ Web searching behaviour and develop better models of their interaction with Web search systems. Web search modelling is a significant and important area of Web research. Searching on the Web is an integral element of information behaviour and human–computer interaction. Web searching includes multitasking processes, the allocation of cognitive resources among several tasks, and shifts in cognitive, problem and knowledge states. In addition to multitasking, cognitive coordination and cognitive shifts are also important, but under-explored, aspects of Web searching. During the Web searching process, beyond physical actions, users experience various cognitive activities. Interactive Web searching involves many cognitive shifts at different information behaviour levels. Cognitive coordination allows users to trade off the dependencies among multiple information tasks and the resources available. Much research has been conducted into Web searching. However, few studies have modelled the nature of, and relationship between, multitasking, cognitive coordination and cognitive shifts in the Web search context. Modelling how Web users interact with Web search systems is vital for the development of more effective Web IR systems. This study aims to model the relationship between multitasking, cognitive coordination and cognitive shifts during Web searching. A preliminary theoretical model is presented based on previous studies. The research is designed to validate the preliminary model. Forty-two study participants were involved in the empirical study. A combination of data collection instruments, including pre- and post-questionnaires, think-aloud protocols, search logs, observations and interviews, was employed to obtain comprehensive data on users during Web search interactions.
Based on the grounded theory approach, qualitative analysis methods including content analysis and verbal protocol analysis were used to analyse the data. The findings were inferred through an analysis of questionnaires, transcriptions of think-aloud protocols, the Web search logs, and notes on observations and interviews. Five key findings emerged. (1) Multitasking during Web searching was demonstrated to be a two-dimensional behaviour. The first dimension was represented as searching on multiple information problems by task switching. Users’ Web searching behaviour was a process of switching between multiple tasks, that is, from searching on one information problem to searching on another. The second dimension of multitasking behaviour was represented as searching on an information problem within multiple Web search sessions. Users usually conducted Web searching on a complex information problem by submitting multiple queries, using several Web search systems and opening multiple windows/tabs. (2) Cognitive shifts were the brain’s internal response to external stimuli. Cognitive shifts were found to be an essential element of searching interactions and users’ Web searching behaviour. The study revealed two kinds of cognitive shifts. The first kind, the holistic shift, included users’ perception of the information problem and overall information evaluation before and after Web searching. The second kind, the state shift, reflected users’ changes in focus between different cognitive states during the course of Web searching. Cognitive states included users’ focus on the states of topic, strategy, evaluation, view and overview. (3) Three levels of cognitive coordination behaviour were identified: the information task coordination level, the coordination mechanism level, and the strategy coordination level. The three levels of cognitive coordination behaviour interplayed to support switching between multiple information tasks.
(4) An important relationship existed between multitasking, cognitive coordination and cognitive shifts during Web searching. Cognitive coordination, as a management mechanism, bound together the other cognitive processes, including multitasking and cognitive shifts, in order to move the user’s Web searching process forward. (5) Web search interaction was shown to be a multitasking process which included the ordering of information problems, task switching, and task and mental coordination; at a deeper level, cognitive shifts also took place. Cognitive coordination was the hinge behaviour linking multitasking and cognitive shifts. Without cognitive coordination, neither multitasking Web searching behaviour nor the complicated mental process of cognitive shifting could occur. The preliminary model was revisited in light of these empirical findings. A revised theoretical model (the MCC Model) was built to illustrate the relationship between multitasking, cognitive coordination and cognitive shifts during Web searching. Implications and limitations of the study are also discussed, along with future research work.
Abstract:
Streptococcus pyogenes, also known as Group A Streptococcus (GAS), has been associated with a range of diseases, from mild pharyngitis and pyoderma to more severe invasive infections such as streptococcal toxic shock. GAS also causes a number of non-suppurative post-infectious diseases such as rheumatic fever, rheumatic heart disease and glomerulonephritis. The large GAS disease burden underscores the need for a prophylactic vaccine that could target the diverse GAS emm types circulating globally. Anti-GAS vaccine strategies have focused primarily on the GAS M-protein, an extracellular virulence factor anchored to the GAS cell wall. As opposed to the hypervariable N-terminal region, the C-terminal portion of the protein is highly conserved among different GAS emm types and is the focus of a leading GAS vaccine candidate, J8-DT/alum. The vaccine candidate J8-DT/alum was shown to be immunogenic in mice, rabbits and a non-human primate, the hamadryas baboon. Similar responses to J8-DT/alum were observed after subcutaneous and intramuscular immunization, in mice and in rabbits. Further assessment of parameters that may influence the immunogenicity of J8-DT demonstrated that the immune responses were identical in male and female mice, and that the use of alum as an adjuvant in the vaccine formulation significantly increased its immunogenicity, resulting in a long-lived serum IgG response. Contrary to previous findings, the data in this thesis indicate that a primary immunization with J8-DT/alum (50 µg) followed by a single boost is sufficient to generate a robust immune response in mice. As expected, the IgG response to J8-DT/alum was a Th2-type response consisting predominantly of the isotype IgG1 accompanied by lower levels of IgG2a. Intramuscular vaccination of rabbits with J8-DT/alum demonstrated that an increase in the dose of J8-DT/alum up to 500 µg does not have an impact on the serum IgG titers achieved.
Similar to the immune response in mice, immunization of baboons with J8-DT/alum established that a 60 µg dose, compared to either 30 µg or 120 µg, was sufficient to generate a robust immune response. Interestingly, mucosal infection of naive baboons with an M1 GAS strain did not induce a J8-specific serum IgG response. As J8-DT/alum-mediated protection has previously been reported to be due to the J8-specific antibody formed, the efficacy of J8-DT antibodies was determined in vitro and in vivo. In vitro opsonization and in vivo passive transfer confirmed the protective potential of J8-DT antibodies. A reduction in the bacterial burden after challenge with a bioluminescent M49 GAS strain in mice that were passively administered J8-DT IgG established that protection due to J8-DT was mediated by antibodies. The GAS burden in infected mice was monitored using bioluminescent imaging (BLI) in addition to traditional CFU assays. Bioluminescent GAS strains including the ‘rheumatogenic’ M1 GAS could not be generated due to limitations with transformation of GAS; however, an M49 GAS strain was utilized during BLI. The M49 serotype is traditionally a ‘nephritogenic’ serotype associated with post-streptococcal glomerulonephritis. Anti-J8-DT antibodies have now been shown to be protective against multiple GAS strains such as M49 and M1. This study evaluated the immunogenicity of J8-DT/alum in different species of experimental animals in preparation for phase I human clinical trials and provided the groundwork for the development of a rapid, non-invasive assay for evaluation of vaccine candidates.
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. 
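The energy-thresholding idea behind the EBD can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the thesis's algorithm: the function name, window lengths and the mean-plus-k-sigma threshold are stand-ins for the statistical model the abstract describes.

```python
import numpy as np

def energy_change_detector(x, fs, win_s=5.0, baseline_s=25.0, k=4.0):
    """Flag analysis windows whose disturbance energy exceeds a threshold
    fitted to a baseline stretch of normal operation."""
    n = int(win_s * fs)                       # samples per window
    n_win = len(x) // n
    energies = np.array([np.sum(x[i*n:(i+1)*n] ** 2) for i in range(n_win)])
    # Simple statistical model of 'normal' energy: baseline mean and spread.
    n_base = max(1, int(baseline_s / win_s))
    mu, sigma = energies[:n_base].mean(), energies[:n_base].std()
    threshold = mu + k * sigma
    # Lightly damped modes decay slowly, so their windows carry more energy.
    return energies, threshold, np.where(energies > threshold)[0]

# Synthetic check: a well-damped ring-down (0-30 s) followed by a
# poorly damped one (30-60 s) of the same 0.7 Hz mode.
fs = 50
t = np.arange(0, 60, 1 / fs)
half = t.size // 2
rng = np.random.default_rng(0)
x = 0.05 * rng.standard_normal(t.size)
x[:half] += np.exp(-0.5 * t[:half]) * np.sin(2 * np.pi * 0.7 * t[:half])
x[half:] += 2.0 * np.exp(-0.02 * t[:half]) * np.sin(2 * np.pi * 0.7 * t[:half])
energies, thr, flagged = energy_change_detector(x, fs)
```

Only windows in the poorly damped half should be flagged; as noted above, the detector reports that a deterioration occurred but not which mode caused it.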
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.
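The whiteness test at the heart of the KID can be illustrated directly. This is a sketch under assumptions: the innovation sequence is supplied externally (no Kalman filter is built here), and the test exploits the fact that for a white innovation each scaled periodogram bin is approximately chi-squared with 2 degrees of freedom, which gives a closed-form threshold.

```python
import numpy as np

def innovation_whiteness_alarm(innov, fs, alpha=1e-6):
    """Chi-squared test on the normalised innovation spectrum.

    For a valid Kalman model the innovation is white, so each scaled
    periodogram bin 2*I_k/sigma^2 is ~ chi-squared with 2 degrees of
    freedom.  Bins above the (1-alpha) quantile raise the alarm, and
    their frequencies locate the changed mode.  alpha is kept very
    small because the test is applied to every bin."""
    n = len(innov)
    sigma2 = np.var(innov)
    freqs = np.fft.rfftfreq(n, d=1 / fs)[1:-1]         # drop DC and Nyquist
    I = (np.abs(np.fft.rfft(innov)) ** 2 / n)[1:-1]    # periodogram
    scaled = 2 * I / sigma2
    q = -2 * np.log(alpha)       # closed-form chi-squared(2) quantile
    peaks = freqs[scaled > q]
    return peaks.size > 0, peaks

# A valid model leaves a white innovation; a deteriorated mode leaves a
# residual oscillation (an assumed 0.6 Hz tone) in the innovation.
rng = np.random.default_rng(1)
fs, n = 20, 2000
white = rng.standard_normal(n)
t = np.arange(n) / fs
broken = white + 0.5 * np.sin(2 * np.pi * 0.6 * t)
alarm_white, _ = innovation_whiteness_alarm(white, fs)
alarm_broken, peaks = innovation_whiteness_alarm(broken, fs)
```

The alarm stays silent for the white sequence and fires for the corrupted one, with the reported peak frequency locating the residual mode.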
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
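The polynomial-phase idea can be sketched with an ordinary least-squares phase fit. This is not the CP function of [16]; it simply fits a polynomial to the unwrapped phase of an analytic signal and differentiates it, which illustrates the sense in which polynomial phase coefficients carry instantaneous-frequency information. The signal and parameters are assumed for the demonstration.

```python
import numpy as np

def polynomial_phase_if(z, fs, order=2):
    """Fit a polynomial to the unwrapped phase of an analytic signal and
    return the instantaneous frequency (phase derivative over 2*pi)."""
    t = np.arange(len(z)) / fs
    phase = np.unwrap(np.angle(z))
    coeffs = np.polynomial.polynomial.polyfit(t, phase, order)
    if_est = np.polynomial.polynomial.polyval(
        t, np.polynomial.polynomial.polyder(coeffs)) / (2 * np.pi)
    return if_est, coeffs

# Linear chirp: IF = 0.5 + 0.1*t Hz, so the estimate should rise
# from 0.5 Hz to about 1.5 Hz over 10 s.
fs = 100
t = np.arange(0, 10, 1 / fs)
z = np.exp(1j * 2 * np.pi * (0.5 * t + 0.05 * t ** 2))
if_est, _ = polynomial_phase_if(z, fs, order=2)
```

A sudden frequency shift would show up both in the new frequency estimate and in the fitted coefficients, which is the cross-referencing opportunity the PPM exploits.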
Abstract:
Scientific discoveries, developments in medicine and health issues are the constant focus of media attention, and the principles surrounding the creation of so-called ‘saviour siblings’ are no exception. Developments in the field of reproductive techniques have provided the ability to genetically analyse embryos created in the laboratory, enabling parents to implant selected embryos to create a tissue-matched child who may be able to cure an existing sick child. The research undertaken in this thesis examines the regulatory frameworks overseeing the delivery of assisted reproductive technologies (ART) in Australia and the United Kingdom and considers how those frameworks impact on the accessibility of in vitro fertilisation (IVF) procedures for the creation of ‘saviour siblings’. In some jurisdictions, the accessibility of such techniques is limited by statutory requirements. The limitations and restrictions imposed by the state in relation to the technology are analysed in order to establish whether such restrictions are justified. The analysis is conducted on the basis of a harm framework. The framework seeks to establish whether those affected by the use of the technology (including the child who will be created) are harmed. In order to undertake such an evaluation, the concept of harm is considered under the scope of John Stuart Mill’s liberal theory, and the Harm Principle is used as a normative tool to judge whether the level of harm that may result justifies state intervention in, or restriction of, the reproductive decision-making of parents in this context. The harm analysis conducted in this thesis seeks to determine an appropriate regulatory response in relation to the use of pre-implantation tissue-typing for the creation of ‘saviour siblings’. The proposals outlined in the last part of this thesis seek to address the concern that harm may result from the practice of pre-implantation tissue-typing.
The current regulatory frameworks in place are also analysed on the basis of the harm framework established in this thesis. The material referred to in this thesis reflects the law and policy in place in Australia and the UK at the time the thesis was submitted for examination (December 2009).
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered.
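The substitution of a time delay for the Hilbert transformer can be illustrated with the sine difference identity. This is only a sketch of the phase-detection idea, not the TDTL loop equations: the delay, frequency and function name are assumed for the demonstration, and the product of frequency and delay is treated as known rather than tracked by a loop.

```python
import numpy as np

def arctan_phase_from_delay(s_now, s_delayed, w_tau):
    """Recover theta from sin(theta) and a delayed sample sin(theta - w*tau),
    using sin(theta - w_tau) = sin(theta)cos(w_tau) - cos(theta)sin(w_tau);
    no 90-degree (Hilbert) phase shifter is required."""
    c = (s_now * np.cos(w_tau) - s_delayed) / np.sin(w_tau)  # cos(theta)
    return np.arctan2(s_now, c)                              # arctan detection

fs, f0, tau = 1000, 50, 0.004            # 4-sample delay (assumed values)
t = np.arange(0, 0.1, 1 / fs)
theta = 2 * np.pi * f0 * t + 0.3
x = np.sin(theta)
d = int(tau * fs)
est = arctan_phase_from_delay(x[d:], x[:-d], 2 * np.pi * f0 * tau)
```

The recovered phase matches the true phase modulo 2π; in an actual loop the quantity ωτ must of course be estimated and tracked.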
This idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of a multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
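The peak-of-a-TFD approach to IF estimation can be sketched with the spectrogram, the simplest member of the quadratic TFD class; the T-class distributions themselves are not reproduced here, and the window and hop sizes are assumptions.

```python
import numpy as np

def spectrogram_if(x, fs, win=128, hop=16):
    """Estimate IF as the frequency of the spectrogram peak in each
    sliding window, i.e. the peak of a quadratic TFD along time."""
    w = np.hanning(win)
    freqs = np.fft.rfftfreq(win, 1 / fs)
    times, ifs = [], []
    for start in range(0, len(x) - win, hop):
        S = np.abs(np.fft.rfft(x[start:start + win] * w)) ** 2
        times.append((start + win / 2) / fs)
        ifs.append(freqs[np.argmax(S)])
    return np.array(times), np.array(ifs)

# Monocomponent test chirp: IF = 50 + 50*t Hz over 2 s.
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * (50 * t + 25 * t ** 2))
times, ifs = spectrogram_if(x, fs)
```

For multicomponent signals one tracks several peaks per time slice rather than the single argmax; the resolution and artifact behaviour of exactly that step is what the proposed T-class is designed to improve.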
Abstract:
In an environment where it has become increasingly difficult to attract consumer attention, marketers have begun to explore alternative forms of marketing communication. One such form that has emerged is product placement, which has more recently appeared in electronic games. Given changes in media consumption and the growth of the games industry, it is not surprising that games are being exploited as a medium for promotional content. Other market developments are also facilitating and encouraging their use, in terms of both the insertion of brand messages into video games and the creation of brand-centred environments, labelled ‘advergames’. However, while there is much speculation concerning the beneficial outcomes for marketers, there remains a lack of academic work in this area and little empirical evidence of the actual effects of this form of promotion on game players. Only a handful of studies in the literature have explored the influence of game placements on consumers. The majority have studied their effect on brand awareness, largely demonstrating that players can recall placed brands. Further, most research conducted to date has focused on computer and online games, yet consoles represent the dominant platform for play (Taub, 2004). Finally, advergames have largely been neglected, particularly those in a console format. Widening the gap in the literature is the fact that insufficient academic attention has been given to product placement as a marketing communication strategy overall, and to games in general. The unique nature of the strategy also makes it difficult to apply existing literature to this context. To address a significant need for information in both the academic and business domains, the current research investigates the effects of brand and product placements in video games and advergames on consumer attitude to the brand and corporate image. It was conducted in two stages. Stage one represents a pilot study.
It explored the effects of simulated and peripheral placements in video games on players’ and observers’ attitudinal responses, and whether these are influenced by involvement with a product category or skill level in the game. The ability of gamers to recall placed brands was also examined. A laboratory experiment was employed with a small sample of sixty adult subjects drawn from an Australian east-coast university, some of whom were exposed to a console video game on a television set. The major finding of study one is that placements in a video game have no effect on gamers’ attitudes, but they are recalled. For stage two of the research, a field experiment was conducted with a large, random sample of 350 student respondents to investigate the effects on players of brand and product placements in handheld video games and advergames. The constructs of brand attitude and corporate image were again tested, along with several potential confounds. Consistent with the pilot, the results demonstrate that product placement in electronic games has no effect on players’ brand attitudes or corporate image, even when allowing for their involvement with the product category, skill level in the game, or skill level in relation to the medium. Age and gender also have no impact. However, the more interactive a player perceives the game to be, the higher their attitude to the placed brand and the corporate image of the brand manufacturer. In other words, when controlling for perceived interactivity, players experienced more favourable attitudes, but the effect was so weak it probably lacks practical significance. It is suggested that this result can be explained by the existence of excitation transfer, rather than any processing of placed brands. The current research provides strong, empirical evidence that brand and product placements in games do not produce strong attitudinal responses.
It appears that the nature of the game medium, game playing experience and product placement impose constraints on gamer motivation, opportunity and ability to process these messages, thereby precluding their impact on attitude to the brand and corporate image. Since this is the first study to investigate the ability of video game and advergame placements to facilitate these deeper consumer responses, further research across different contexts is warranted. Nevertheless, the findings have important theoretical and managerial implications. This investigation makes a number of valuable contributions. First, it is relevant to current marketing practice and presents findings that can help guide promotional strategy decisions. It also presents a comprehensive review of the games industry and associated activities in the marketplace, relevant for marketing practitioners. Theoretically, it contributes new knowledge concerning product placement, including how it should be defined, its classification within the existing communications framework, its dimensions and effects. This is extended to include brand-centred entertainment. The thesis also presents the most comprehensive analysis available in the literature of how placements appear in games. In the consumer behaviour discipline, the research builds on theory concerning attitude formation, through application of MacInnis and Jaworski’s (1989) Integrative Attitude Formation Model. With regards to the games literature, the thesis provides a structured framework for the comparison of games with different media types; it advances understanding of the game medium, its characteristics and the game playing experience; and provides insight into console and handheld games specifically, as well as interactive environments generally. This study is the first to test the effects of interactivity in a game environment, and presents a modified scale that can be used as part of future research. 
Methodologically, it addresses the limitations of prior research through execution of a field experiment and observation with a large sample, making this the largest study of product placement in games available in the literature. Finally, the current thesis offers comprehensive recommendations that will provide structure and direction for future study in this important field.
Abstract:
The effective daylighting of multistorey commercial building interiors poses an interesting problem for designers in Australia’s tropical and subtropical context. Given that a building exterior receives adequate sun and skylight, as dictated by location-specific factors such as weather, siting and external obstructions, the availability of daylight throughout its interior is dependent on certain building characteristics: the distance from a window façade (room depth), ceiling or window head height, window size and the visible transmittance of daylighting apertures. The daylighting of general-stock, multistorey commercial buildings is made difficult by their design limitations with respect to some of these characteristics. The admission of daylight to these interiors is usually exclusively by vertical windows. Using conventional glazing, such windows can only admit sun and skylight to a depth of approximately 2 times the window height. This penetration depth is typically much less than the depth of the office interiors, so that core areas of these buildings receive little or no daylight. This issue is particularly relevant where deep, open-plan office layouts prevail. The resulting interior daylight pattern is a relatively narrow perimeter zone bathed in (sometimes too intense) light, contrasted with a poorly daylit core zone. The broad luminance range this may present to a building occupant’s visual field can be a source of discomfort glare. Furthermore, the need in most tropical and subtropical regions to restrict solar heat gains to building interiors for much of the year has resulted in the widespread use of heavily tinted or reflective glazing on commercial building façades. This strategy reduces the amount of solar radiation admitted to the interior, thereby decreasing daylight levels proportionately throughout. However, this technique does little to improve the way light is distributed throughout the office space.
Where clear skies dominate weather conditions, at different times of day or year direct sunlight may pass unobstructed through vertical windows causing disability or discomfort glare for building occupants and as such, its admission to an interior must be appropriately controlled. Any daylighting system to be applied to multistorey commercial buildings must consider these design obstacles, and attempt to improve the distribution of daylight throughout these deep, sidelit office spaces without causing glare conditions. The research described in this thesis delineates first the design optimisation and then the actual prototyping and manufacture process of a daylighting device to be applied to such multistorey buildings in tropical and subtropical environments.
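The penetration-depth rule of thumb quoted above implies how much of a deep office floor is left unserved by side windows; a small worked example with assumed dimensions:

```python
# Rule of thumb from the text: with conventional glazing, vertical windows
# admit daylight to roughly twice the window height.  The dimensions below
# are illustrative assumptions, not values from the thesis.
window_height_m = 2.7                          # assumed window height
office_depth_m = 12.0                          # assumed open-plan floor depth
daylit_depth_m = 2 * window_height_m           # ~5.4 m perimeter zone
core_depth_m = office_depth_m - daylit_depth_m # ~6.6 m with little daylight
```

More than half of this assumed floor plate falls in the poorly daylit core zone, which is the distribution problem the proposed daylighting device targets.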
Abstract:
Inspection of solder joints is a critical process in the electronics manufacturing industry for reducing manufacturing cost, improving yield, and ensuring product quality and reliability. Solder joint inspection is more challenging than many other visual inspection tasks because of the variability in the appearance of solder joints. Although much research has been conducted and various techniques have been developed to classify defects in solder joints, these methods rely on complex illumination systems for image acquisition and complicated classification algorithms. An important stage of the analysis is selecting the right classification method. Better inspection technologies are needed to fill the gap between available inspection capabilities and industry requirements. This dissertation aims to provide a solution that overcomes some of the limitations of current inspection techniques. The research proposes a two-stage automatic solder joint classification system. The “front-end” inspection stage comprises illumination normalisation, localisation and segmentation. The illumination normalisation approach effectively and efficiently eliminates the effect of uneven illumination while preserving the properties of the processed image. The “back-end” inspection stage classifies solder joints using the Log Gabor filter and classifier fusion. Five levels of solder quality, defined with respect to the amount of solder paste, are considered. The Log Gabor filter is shown to achieve high recognition rates and to be resistant to misalignment, and further testing demonstrates its advantage over both the Discrete Wavelet Transform and the Discrete Cosine Transform. Classifier score fusion is analysed as a means of improving the recognition rate. Experimental results demonstrate that the proposed system improves performance and robustness in terms of classification rates.
The proposed system does not require any special illumination arrangement; images are acquired with an ordinary digital camera. In fact, the choice of suitable features overcomes the problems introduced by using a simple illumination system. The new system proposed in this research can be incorporated in the development of an automated, non-contact, non-destructive and low-cost solder joint quality inspection system.
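The Log Gabor filter named in the back-end stage has a standard radial frequency response, which can be sketched as follows. This is the commonly used textbook form, not necessarily the exact parameterisation or 2-D filter bank used in the dissertation; `sigma_ratio` and the example frequencies are illustrative assumptions.

```python
import math

def log_gabor_radial(f, f0, sigma_ratio=0.55):
    """Radial frequency response of a Log Gabor filter (common form):

        G(f) = exp(-ln(f/f0)^2 / (2 * ln(sigma_ratio)^2))

    f0 is the centre frequency; sigma_ratio = sigma/f0 sets the bandwidth.
    """
    if f <= 0.0:
        return 0.0  # a Log Gabor filter has zero DC response by construction
    return math.exp(-(math.log(f / f0) ** 2) / (2.0 * math.log(sigma_ratio) ** 2))

# The response peaks (at 1.0) at the centre frequency and is symmetric
# on a logarithmic frequency axis:
print(log_gabor_radial(0.1, f0=0.1))  # → 1.0
```

The zero DC response and log-frequency symmetry are the properties usually cited for preferring Log Gabor filters over ordinary Gabor filters in texture and appearance classification.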
Abstract:
Developmental progression and differentiation of distinct cell types depend on the regulation of gene expression in space and time. Tools that allow spatial and temporal control of gene expression are crucial for the accurate elucidation of gene function. Most systems to manipulate gene expression allow control of only one factor, space or time, and currently available systems that control both temporal and spatial expression of genes have their limitations. We have developed a versatile two-component system that overcomes these limitations, providing reliable, conditional gene activation in restricted tissues or cell types. This system allows conditional tissue-specific ectopic gene expression and provides a tool for conditional cell type- or tissue-specific complementation of mutants. The chimeric transcription factor XVE, in conjunction with Gateway recombination cloning technology, was used to generate a tractable system that can efficiently and faithfully activate target genes in a variety of cell types. Six promoters/enhancers, each with different tissue specificities (including vascular tissue, trichomes, root, and reproductive cell types), were used in activation constructs to generate different expression patterns of XVE. Conditional transactivation of reporter genes was achieved in a predictable, tissue-specific pattern of expression, following the insertion of the activator or the responder T-DNA in a wide variety of positions in the genome. Expression patterns were faithfully replicated in independent transgenic plant lines. Results demonstrate that we can also induce mutant phenotypes using conditional ectopic gene expression. One of these mutant phenotypes could not have been identified using noninducible ectopic gene expression approaches.
Abstract:
This article provides an overview of the concept of vulnerability through the lens of the U.S. federal regulations for the protection of human subjects of research. General issues that emerge for nurse researchers working with regulated vulnerable populations are identified. Points of current controversy in the application of the regulations, and current discourse about vulnerable groups, are highlighted. Suggestions are given for negotiating the tension between federally regulated human subject requirements and the realities of research with vulnerable subjects. The limitations of the designation of vulnerable as a protection for human subjects are also discussed.
Abstract:
Purpose – This paper aims to present a novel rapid prototyping (RP) fabrication method and preliminary characterization of chitosan scaffolds. Design – A desktop rapid prototyping robot dispensing (RPBOD) system has been developed to fabricate scaffolds for tissue engineering (TE) applications. The system is a computer-controlled four-axis machine with a multiple-dispenser head. Neutralization of the acetic acid with sodium hydroxide causes the chitosan to precipitate, forming a gel-like strand. The scaffold properties were characterized by scanning electron microscopy, porosity calculation and compression testing. An example of the fabrication of a freeform hydrogel scaffold is demonstrated. The required geometric data for the freeform scaffold were obtained from CT-scan images, and the dispensing path control data were converted from its volume model. The applications of the scaffolds are discussed based on their potential for TE. Findings – It is shown that the RPBOD system can be interfaced with imaging techniques and computational modeling to produce scaffolds which can be customized in overall size and shape, allowing tissue-engineered grafts to be tailored to specific applications or even to individual patients. Research limitations/implications – Important challenges for further research are the incorporation of growth factors, as well as cell seeding, into the 3D dispensing plotting materials. Improvements to the mechanical properties of the scaffolds are also necessary. Originality/value – One of the important aspects of TE is the design of scaffolds. For customized TE, it is essential to be able to fabricate 3D scaffolds of various geometric shapes in order to repair tissue defects. RP or solid free-form fabrication techniques hold great promise for designing 3D customized scaffolds; yet traditional cell-seeding techniques may not provide enough cell mass for larger constructs.
This paper presents a novel attempt to fabricate 3D scaffolds using hydrogels which can, in the future, be combined with cells.
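The porosity calculation mentioned among the characterization methods is commonly done gravimetrically, which can be sketched as below. This is a generic sketch of that approach, not the paper's specific procedure; the function name, the example masses and volumes, and the assumed chitosan bulk density are all illustrative.

```python
def scaffold_porosity(mass_g, volume_cm3, material_density_g_cm3):
    """Gravimetric porosity estimate for a porous scaffold:

        porosity = 1 - apparent_density / material_density

    where apparent density is the scaffold mass divided by its bulk volume.
    """
    apparent_density = mass_g / volume_cm3
    return 1.0 - apparent_density / material_density_g_cm3

# Illustrative values (not measured data): a 0.5 g scaffold occupying
# 2.0 cm^3 of bulk volume, with chitosan taken at an assumed ~1.4 g/cm^3.
porosity = scaffold_porosity(0.5, 2.0, 1.4)  # ≈ 0.82, i.e. ~82% porous
```

High porosity of this order is what such scaffolds typically aim for, since the void fraction governs cell infiltration and nutrient transport.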
Abstract:
Ubiquitous access to patient medical records is an important aspect of ensuring patient safety. The unavailability of sufficient medical information at the point-of-care can lead to fatalities. The U.S. Institute of Medicine has reported that between 44,000 and 98,000 people die each year from medical errors, such as incorrect medication dosages caused by poor legibility in manual records or by delays in consolidating the information needed to discern the proper intervention. In this research we propose employing emergent technologies such as Java SIM Cards (JSC), Smart Phones (SP), Next Generation Networks (NGN), Near Field Communication (NFC), Public Key Infrastructure (PKI), and biometric identification to develop a secure framework and related protocols for ubiquitous access to Electronic Health Records (EHR). A partial EHR contained within a JSC can be used at the point-of-care to support quick diagnosis of a patient’s problems; the full EHR can be accessed from an Electronic Health Records Centre (EHRC) when time and network availability permit. Moreover, the framework and related protocols enable patients to give explicit consent, via their Smart Phone, for a doctor to access their personal medical data when the doctor needs to view or update the patient’s medical information during an examination. The proposed solution also gives patients the power to modify the Access Control List (ACL) governing their EHRs and to view their EHRs through their Smart Phone. Currently, very little research has been done on using JSCs and similar technologies as a portable repository of EHRs, or on the specific security issues likely to arise when JSCs are used for ubiquitous access to EHRs. Previous research has been concerned with using Medicare cards, a kind of Smart Card, as a repository of medical information at the patient point-of-care.
However, this imposes limitations on the patient’s emergency medical care, including the inability to detect the patient’s location, to call and send information to an emergency room automatically, and to interact with the patient in order to obtain consent. The aim of our framework and related protocols is to overcome these limitations by taking advantage of the SIM card and the technologies mentioned above. Briefly, the framework and related protocols offer the full benefits of access to an up-to-date, precise and comprehensive medical history of a patient, while their mobility provides ubiquitous access to medical and patient information wherever it is needed. The objective is to automate interactions between patients, healthcare providers and insurance organisations, increase patient safety, improve quality of care, and reduce costs.
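The patient-managed Access Control List described above can be sketched as a minimal data structure. This is a hypothetical illustration of the idea of patient-granted, revocable permissions; the class, method names and permission strings are assumptions, not the paper's actual protocol, which additionally involves PKI and biometric checks.

```python
from dataclasses import dataclass, field

@dataclass
class EHRAccessControlList:
    """Hypothetical sketch of a patient-managed ACL for an EHR.

    The patient grants or revokes a clinician's permissions (conceptually,
    from their Smart Phone); the EHRC would consult this list on access.
    """
    patient_id: str
    grants: dict = field(default_factory=dict)  # doctor_id -> set of permissions

    def grant(self, doctor_id, *permissions):
        """Patient consents to the given permissions for a doctor."""
        self.grants.setdefault(doctor_id, set()).update(permissions)

    def revoke(self, doctor_id):
        """Patient withdraws all of a doctor's permissions."""
        self.grants.pop(doctor_id, None)

    def is_allowed(self, doctor_id, permission):
        """Check whether a doctor currently holds a permission."""
        return permission in self.grants.get(doctor_id, set())

# Illustrative use: consent granted during an examination, then revoked.
acl = EHRAccessControlList("patient-001")
acl.grant("dr-smith", "read", "update")
print(acl.is_allowed("dr-smith", "update"))  # → True
acl.revoke("dr-smith")
print(acl.is_allowed("dr-smith", "read"))    # → False
```

The sketch captures only the consent bookkeeping; in the proposed framework the authorisation decision would also depend on authenticating both parties before the ACL is consulted.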
Abstract:
Nitrous oxide (N2O) is produced in soils primarily by the microbially mediated processes of nitrification and denitrification. Its production is influenced by a suite of climate (i.e. temperature and rainfall) and soil (physical and chemical) variables, by interacting soil and plant nitrogen (N) transformations (either competing for or supplying substrates), and by land management practices. It is therefore not surprising that N2O emissions are highly variable both spatially and temporally. Computer simulation models, which can integrate all of these variables, are required for the complex task of providing quantitative determinations of N2O emissions. Numerous simulation models have been developed to predict N2O production, each with its own philosophy in constructing simulation components and its own performance strengths. The models range from those that attempt to simulate all soil processes comprehensively to more empirical approaches requiring minimal input data. These N2O simulation models can be classified into three categories: laboratory, field and regional/global scale. Process-based field-scale N2O simulation models, which simulate whole agroecosystems and can be used to develop N2O mitigation measures, are the most widely used. The current challenge is how to scale up the relatively more robust field-scale models to catchment, regional and national scales. This paper reviews the development history, main construction components, strengths, limitations and applications of the N2O emission models published in the literature. All three scale levels are considered, and the current knowledge gaps and challenges in modelling N2O emissions from soils are discussed.