944 results for Non Ideal System


Relevance:

30.00%

Publisher:

Abstract:

Background: Despite its efficacy and cost-effectiveness, exercise-based cardiac rehabilitation is undertaken by less than one-third of clinically eligible cardiac patients in every country for which data are available. Reasons for non-participation include the unavailability of hospital-based rehabilitation programs, or excessive travel time and distance. For this reason, there have been calls for the development of more flexible alternatives.

Methodology and Principal Findings: We developed a system to enable walking-based cardiac rehabilitation in which the patient's single-lead ECG, heart rate, and GPS-based speed and location are transmitted by a programmed smartphone to a secure server for real-time monitoring by a qualified exercise scientist. The feasibility of this approach was evaluated in 134 remotely monitored exercise assessment and exercise sessions in cardiac patients unable to undertake hospital-based rehabilitation. Completion rates, rates of technical problems, detection of ECG changes, pre- and post-intervention six-minute walk test (6MWT), cardiac depression and Quality of Life (QOL) were key measures. The system was rated as easy and quick to use. It allowed participants to complete six weeks of exercise-based rehabilitation near their homes, worksites, or when travelling. The majority of sessions were completed without any technical problems, although periodic signal loss in areas of poor coverage was an occasional limitation. Several exercise and post-exercise ECG changes were detected. Participants showed improvements comparable to those reported for hospital-based programs, walking significantly further on the post-intervention 6MWT, 637 m (95% CI: 565–726), than on the pre-test, 524 m (95% CI: 420–655), and reporting significantly reduced levels of cardiac depression and significantly improved physical health-related QOL.

Conclusions and Significance: The system provided a feasible and very flexible alternative form of supervised cardiac rehabilitation for those unable to access hospital-based programs, with the potential to address a well-recognised deficiency in health care provision in many countries. Future research should assess its longer-term efficacy, cost-effectiveness and safety in larger samples representing the spectrum of cardiac morbidity and severity.
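
As a rough illustration of the telemetry this kind of system relies on, the sketch below shows the sort of per-sample payload a smartphone client might stream to the monitoring server. It is a minimal sketch: all field names, units and values are invented for illustration and are not the authors' implementation.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TelemetrySample:
    """One monitoring sample; all fields are illustrative, not the authors' schema."""
    patient_id: str
    timestamp: float     # Unix epoch seconds
    ecg_mv: list         # window of single-lead ECG samples, millivolts
    heart_rate_bpm: int
    lat: float           # GPS position
    lon: float
    speed_m_s: float     # GPS-derived walking speed

def encode_sample(sample: TelemetrySample) -> bytes:
    """Serialise one sample for upload; a real client would send this over TLS."""
    return json.dumps(asdict(sample)).encode("utf-8")

sample = TelemetrySample("p001", time.time(), [0.12, 0.34, -0.05],
                         102, -27.477, 153.028, 1.4)
payload = encode_sample(sample)
```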

Relevance:

30.00%

Publisher:

Abstract:

The epithelium of the corneolimbus contains stem cells for regenerating the corneal epithelium. Diseases and injuries affecting the limbus can lead to a condition known as limbal stem cell deficiency (LSCD), which results in loss of the corneal epithelium and subsequent chronic inflammation and scarring of the ocular surface. Advances in the treatment of LSCD have been achieved through the use of cultured human limbal epithelial (HLE) grafts to restore the epithelial stem cells of the ocular surface. These epithelial grafts are usually produced by the ex vivo expansion of HLE cells on human donor amniotic membrane (AM), but this is not without limitations. Although AM is the most widely accepted substratum for HLE transplantation, donor variation, risk of disease transfer and rising costs have led to the search for alternative biomaterials to improve the surgical outcome of LSCD. Recent studies have demonstrated that Bombyx mori silk fibroin (hereafter referred to as fibroin) membranes support the growth of primary HLE cells, and thus this thesis aims to explore the possibility of using fibroin as a biomaterial for ocular surface reconstruction. Ideally, the grafted sheets of cultured epithelium would provide a replenishing source of epithelial progenitor cells for maintaining the corneal epithelium; however, HLE cells lose their progenitor cell characteristics once removed from their niche. More severe ocular surface injuries, which result in stromal scarring, damage the epithelial stem cell niche, which subsequently leads to poor corneal re-epithelialisation post-grafting. An ideal solution for repairing the corneal limbus would therefore be to grow and transplant HLE cells on a biomaterial that also provides a means of replacing the underlying stromal cells required to better simulate the normal stem cell niche. The recent discovery of limbal mesenchymal stromal cells (L-MSC) provides a possibility for stromal repair and regeneration, and therefore this thesis presents the use of fibroin as a possible biomaterial to support a three-dimensional tissue-engineered corneolimbus with both an HLE and an underlying L-MSC layer. Investigation into optimal scaffold design is necessary, including adequate separation of the epithelial and stromal layers, as well as direct cell-cell contact. Firstly, the attachment, morphology and phenotype of HLE cells grown on fibroin were directly compared to those observed on donor AM, the current clinical standard substrate for HLE transplantation. The production, transparency and permeability of fibroin membranes were also evaluated in this part of the study. Results revealed that fibroin membranes could be routinely produced using a custom-made film casting table and were found to be transparent and permeable. Attachment of HLE cells to fibroin after 4 hours in serum-free medium was similar to that supported by tissue culture plastic, but approximately 6-fold less than that observed on AM. While HLE cultured on AM displayed superior stratification, epithelia constructed from HLE on fibroin maintained evidence of corneal phenotype (cytokeratin pair 3/12 expression; CK3/12) and displayed a comparable number and distribution of ΔNp63+ progenitor cells to that seen in cultures grown on AM. These results confirm the suitability of membranes constructed from silk fibroin as a possible substrate for HLE cultivation.
One of the most important aspects of corneolimbal tissue engineering is the reconstruction of the limbal stem cell niche to help form the natural limbus in situ. MSC with similar properties to bone marrow-derived MSC (BM-MSC) have recently been grown from the limbus of the human cornea. This thesis evaluated methods for culturing L-MSC and limbal keratocytes using various serum-free media. The phenotype of the resulting cultures was examined using photography, flow cytometry for CD34 (keratocyte marker), CD45 (bone marrow-derived cell marker), CD73, CD90, CD105 (collectively MSC markers), CD141 (epithelial/vascular endothelial marker) and CD271 (neuronal marker), immunocytochemistry (alpha-smooth muscle actin; α-SMA), differentiation assays (osteogenesis, adipogenesis and chondrogenesis), and co-culture experiments with HLE cells. While all techniques supported, to varying degrees, the establishment of keratocyte and L-MSC cultures, sustained growth and serial propagation were only achieved in serum-supplemented medium or the MesenCult-XF® culture system (Stem Cell Technologies). Cultures established in MesenCult-XF® grew faster than those grown in serum-supplemented medium and retained a more optimal MSC phenotype. L-MSC cultivated in MesenCult-XF® were also positive for CD141, rarely expressed α-SMA, and displayed multi-potency. L-MSC supported the growth of HLE cells, with the largest epithelial islands being observed in the presence of L-MSC established in MesenCult-XF® medium. All HLE cultures supported by L-MSC widely expressed the progenitor cell marker ΔNp63, along with the corneal differentiation marker CK3/12. These findings indicate that MesenCult-XF® is a superior culture system for L-MSC, but further studies are required to explore the significance of CD141 expression in these cells. Following on from the findings of the previous two parts, silk fibroin was tested as a novel dual-layer construct containing both an epithelium and an underlying stroma for corneolimbal reconstruction. In this section, the growth and phenotype of HLE cells on non-porous versus porous fibroin membranes were compared. Furthermore, the growth of L-MSC in either serum-supplemented medium or the MesenCult-XF® culture system within fibroin fibrous mats was investigated. Lastly, the co-culture of HLE and L-MSC in serum-supplemented medium on and within fibroin dual-layer constructs was also examined. HLE on porous membranes displayed a flattened and squamous monolayer; in contrast, HLE on non-porous fibroin appeared cuboidal and stratified, closer in appearance to a normal corneal epithelium. Both constructs maintained CK3/12 expression and the distribution of ΔNp63+ progenitor cells. Dual-layer fibroin scaffolds consisting of HLE cells and L-MSC maintained a phenotype similar to that on the single layers alone. Overall, the present study proposed to create a three-dimensional limbal tissue substitute of HLE cells and L-MSC together, ultimately for safe and beneficial transplantation back into the human eye. The results show that HLE and L-MSC can be cultivated separately and together whilst maintaining a clinically feasible phenotype containing a majority of progenitor cells. In addition, L-MSC could be cultivated routinely in the MesenCult-XF® culture system while maintaining a high purity for the characteristic MSC phenotype.
However, as a serum-free culture medium was not found to sustain the growth of both HLE and L-MSC, the combination scaffold was created in serum-supplemented medium, indicating that further refinement of this cultured limbal scaffold is required. This thesis has also demonstrated a potential novel marker for L-MSC, and has generated knowledge which may advance the understanding of stromal-epithelial interactions. These results support the feasibility of a dual-layer tissue-engineered corneolimbus constructed from silk fibroin, and warrant further studies into the potential benefits it offers for corneolimbal tissue regeneration. Further refinement of this technology should explore the potential benefits of using epithelial-stromal co-cultures with MesenCult-XF®-derived L-MSC. Subsequent investigations into the effects of long-term culture on the phenotype and behaviour of the cells in the dual-layer scaffolds are also required. While this project demonstrated the in vitro feasibility of producing a dual-layer tissue-engineered corneolimbus, further studies are required to test the efficacy of the limbal scaffold in vivo. Future in vivo studies are essential to fully understand the integration and degradation of silk fibroin biomaterials in the cornea over time. Subsequent experiments should also investigate the use of both AM and silk fibroin with epithelial and stromal cell co-cultures in an animal model of LSCD. The outcomes of this project provide a foundation for research into corneolimbal reconstruction using biomaterials and offer a stepping stone for future studies into corneolimbal tissue engineering.

Relevance:

30.00%

Publisher:

Abstract:

Queensland University of Technology (QUT) was one of the first universities in Australia to establish an institutional repository. Launched in November 2003, the repository (QUT ePrints) uses the EPrints open source repository software (from Southampton) and has enjoyed the benefit of an institutional deposit mandate since January 2004. Currently (April 2012), the repository holds over 36,000 records, including 17,909 open access publications, with another 2,434 publications embargoed but with mediated access enabled via the ‘Request a copy’ button, a feature of the EPrints software. At QUT, the repository is managed by the Library. QUT ePrints (http://eprints.qut.edu.au) is embedded into a number of other systems at QUT, including the staff profile system and the University’s research information system. It has also been integrated into a number of critical processes related to Government reporting and research assessment. Internally, senior research administrators often look to the repository for information to assist with decision-making and planning. While some statistics could be drawn from the advanced search feature and the existing download statistics feature, they were rarely at the level of granularity or aggregation required, and getting the information from the ‘back end’ of the repository was very time-consuming for Library staff. In 2011, the Library funded a project to enhance the range of statistics available from the public interface of QUT ePrints. The repository team conducted a series of focus groups and individual interviews to identify and prioritise functionality requirements for a new statistics ‘dashboard’. The participants included a mix of research administrators, early career researchers and senior researchers. The repository team identified a number of business criteria (e.g. extensibility, available support, required skills) and gave each a weighting. After considering all the known options, five software packages (IRStats, ePrintsStats, AWStats, BIRT and Google Urchin/Analytics) were thoroughly evaluated against a list of 69 criteria to determine which would be most suitable. The evaluation revealed that IRStats was the best fit for our requirements: it was deemed capable of meeting 21 of the 31 high-priority criteria. Consequently, IRStats was implemented as the basis for QUT ePrints’ new statistics dashboards, which were launched in Open Access Week, October 2011. Statistics dashboards are now available at four levels: whole-of-repository, organisational unit, individual author and individual item. The data available include cumulative total deposits, time-series deposits, deposits by item type, percentage of full texts, percentage open access, cumulative downloads, time-series downloads, downloads by item type, author ranking, paper ranking (by downloads), downloader geographic location, domains, internal vs external downloads, citation data (from Scopus and Web of Science), most popular search terms, and non-search referring websites. The data are displayed in chart, map and table formats. The new statistics dashboards are a great success. Feedback received from staff and students has been very positive. Individual researchers have said that they have found the information very useful when compiling a track record. It is now very easy for senior administrators (including the Deputy Vice-Chancellor (Research)) to compare full-text deposit rates (i.e. mandate compliance rates) across organisational units. This has led to increased ‘encouragement’ from Heads of School and Deans in relation to the provision of full-text versions.
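
As a toy illustration of the weighted-criteria approach described above, the sketch below scores candidate packages against weighted criteria; the criteria, weights and scores are invented, not the Library's actual 69-criterion evaluation.

```python
# Hypothetical weighted-criteria scoring; all weights and scores invented.
criteria_weights = {"extensible": 3, "support_available": 2, "skills_required": 1}

packages = {  # candidate score per criterion, 0-5 (invented)
    "IRStats": {"extensible": 4, "support_available": 4, "skills_required": 3},
    "AWStats": {"extensible": 2, "support_available": 3, "skills_required": 4},
}

def weighted_score(scores):
    """Total of criterion scores multiplied by their weights."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranking = sorted(packages, key=lambda p: weighted_score(packages[p]), reverse=True)
print(ranking)  # best-fit package first
```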

Relevance:

30.00%

Publisher:

Abstract:

The fire safety of light gauge cold-formed steel frame (LSF) wall systems is significant to building design. Gypsum plasterboard is widely used as a fire safety material in the building industry. It contains gypsum (CaSO4·2H2O), calcium carbonate (CaCO3) and, most importantly, free and chemically bound water in its crystal structure. The dehydration of the gypsum and the decomposition of calcium carbonate absorb heat, which gives gypsum plasterboard its fire-resistant qualities. Recently a new composite panel system was developed in which a thin insulation layer is used externally between two plasterboards to improve the fire performance of LSF walls. In this research, finite element thermal models of both the traditional LSF wall panels with cavity insulation and the new LSF composite wall panels were developed to simulate their thermal behaviour under standard and realistic design fire conditions. Suitable thermal properties of gypsum plasterboard, insulation materials and steel were used. The developed models were then validated by comparing their results with fire test results. This paper presents the details of the developed finite element models of non-load-bearing LSF wall panels and the thermal analysis results. It shows that finite element models can be used to simulate the thermal behaviour of LSF walls with varying configurations of insulation and plasterboards. The results show that the use of cavity insulation was detrimental to the fire rating of LSF walls, while the use of external insulation offered superior thermal protection. Effects of realistic fire conditions are also presented.
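
The study's models are detailed finite element models with temperature-dependent properties; as a much simpler illustration of the underlying heat-transfer computation, the sketch below advances an explicit 1D finite-difference model of transient conduction through a single plasterboard layer under the ISO 834 standard fire curve. The thickness, constant diffusivity and adiabatic ambient-side boundary are simplifying assumptions.

```python
import numpy as np

# Explicit 1D finite-difference sketch of transient heat conduction through
# a 16 mm plasterboard layer exposed to the ISO 834 standard fire curve.
L, n = 0.016, 33                 # layer thickness (m), grid nodes
dx = L / (n - 1)
alpha = 3e-7                     # assumed constant thermal diffusivity (m^2/s)
dt = 0.4 * dx**2 / alpha         # time step satisfying explicit stability limit
T = np.full(n, 20.0)             # initial temperature (deg C)

for step in range(int(3600 / dt)):                       # one hour of exposure
    t_min = step * dt / 60.0
    T[0] = 20.0 + 345.0 * np.log10(8.0 * t_min + 1.0)    # fire side: ISO 834
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                                        # ambient side adiabatic

print(f"ambient-side temperature after 1 h: {T[-1]:.0f} C")
```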

Relevance:

30.00%

Publisher:

Abstract:

There is a need for an accurate real-time quantitative system that would enhance decision-making in the treatment of osteoarthritis. To achieve this objective, significant research is required to enable articular cartilage properties to be measured and categorized for health and functionality without the need for laboratory tests involving biopsies for pathological evaluation. Such a system would provide access to the internal condition of the cartilage matrix and thus extend the vision-based arthroscopy currently used beyond the subjective evaluation of surgeons. The system must be able to non-destructively probe the entire thickness of the cartilage and its immediate subchondral bone layer. In this thesis, near infrared spectroscopy is investigated for this purpose. The aim is to relate the structure and load-bearing properties of the cartilage matrix to the near infrared absorption spectrum and establish functional relationships that provide objective, quantitative and repeatable categorization of cartilage condition outside the area of visible degradation in a joint. Based on results from traditional mechanical testing, their innovative interpretation and their relationship with spectroscopic data, new parameters were developed. These were then evaluated for their consistency in discriminating between healthy viable and degraded cartilage. The mechanical and physico-chemical properties were related to specific regions of the near infrared absorption spectrum that were identified as part of the research conducted for this thesis. The relationships between the tissue's near infrared spectral response and the new parameters were modeled using multivariate statistical techniques based on partial least squares regression (PLSR). With significantly high levels of statistical correlation, the modeled relationships were demonstrated to possess considerable potential in predicting the properties of unknown tissue samples in a quick and non-destructive manner. In order to adapt near infrared spectroscopy for clinical applications, the balance between probe diameter and the number of active transmit-receive optic fibres must be optimized. This was achieved in the course of this research, resulting in an optimal probe configuration that could be adapted for joint tissue evaluation. Furthermore, as a proof of concept, a protocol for obtaining the new parameters from the near infrared absorption spectra of cartilage was developed, implemented in graphical user interface (GUI)-based software, and used to assess cartilage-on-bone samples in vitro. This conceptual implementation has demonstrated, in part through the individual parametric relationships with the near infrared absorption spectrum, the capacity of the proposed system to facilitate real-time, non-destructive evaluation of cartilage matrix integrity. In summary, the potential of near infrared spectroscopy for evaluating the articular cartilage and bone laminate has been demonstrated in this thesis. The approach could also have spin-offs for other soft tissues and organs of the body. It builds on the earlier work of the group at QUT, enhancing the near infrared component of ongoing research on developing a tool for cartilage evaluation that goes beyond visual and subjective methods.
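
A minimal sketch of the PLSR modelling step, using scikit-learn on synthetic stand-in spectra (the study itself related measured NIR absorbance to mechanical and physico-chemical properties of real tissue):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the NIR data: 40 specimens x 100 wavelengths, with
# a tissue property (e.g. stiffness) hidden in a few spectral bands.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))                              # absorbance spectra
y = X[:, 20] - 0.5 * X[:, 55] + 0.1 * rng.normal(size=40)   # target property

pls = PLSRegression(n_components=5)
scores = cross_val_score(pls, X, y, cv=5)   # default score for PLSR is R^2
print(f"cross-validated R^2: {scores.mean():.2f}")
```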

Relevance:

30.00%

Publisher:

Abstract:

The security of power transfer across a given transmission link is typically assessed at steady state. This paper develops tools to assess machine angle stability as affected by a combination of faults and the uncertainty of wind power, using probability analysis. The paper elaborates on the development of the theoretical assessment tool and demonstrates its efficacy using a single-machine infinite-bus system.

Relevance:

30.00%

Publisher:

Abstract:

With the advent of large-scale wind farms and their integration into electrical grids, more uncertainties, constraints and objectives must be considered in power system development. It is therefore necessary to introduce risk-control strategies into the planning of transmission systems connected with wind power generators. This paper presents a probability-based multi-objective model equipped with three risk-control strategies. The model is developed to evaluate and enhance the ability of the transmission system to protect against overload risks when wind power is integrated into the power system. The model involves: (i) defining the uncertainties associated with wind power generators with probability measures and calculating the probabilistic power flow with the combined use of cumulants and the Gram-Charlier series; (ii) developing three risk-control strategies by specifying the smallest acceptable non-overload probability for each branch and for the whole system, and specifying the non-overload margin for all branches in the whole system; (iii) formulating an overload risk index based on the non-overload probability and the non-overload margin defined; and (iv) developing a multi-objective transmission system expansion planning (TSEP) model with objective functions composed of transmission investment and the overload risk index. The presented work represents a superior risk-control model for TSEP in terms of security, reliability and economy. The transmission expansion planning model with the three risk-control strategies demonstrates its feasibility in a case study using two typical power systems.
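
Step (i) can be illustrated with a small sketch: given the first four cumulants of a branch power flow, a Gram-Charlier A-series approximates the probability that the flow stays within the branch rating. The cumulant values and rating below are invented.

```python
from math import erf, exp, pi, sqrt

# Invented cumulants of one branch flow (MW, MW^2, MW^3, MW^4) and its rating.
k1, k2, k3, k4 = 80.0, 25.0, 10.0, 30.0
limit = 95.0

mu, sigma = k1, sqrt(k2)
z = (limit - mu) / sigma
phi = exp(-z * z / 2.0) / sqrt(2.0 * pi)      # standard normal pdf
Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))        # standard normal cdf
he2, he3 = z * z - 1.0, z**3 - 3.0 * z        # Hermite polynomials He2, He3

# Gram-Charlier A-series approximation of the cdf at the branch rating.
p_no_overload = Phi - phi * (k3 / (6.0 * k2**1.5) * he2
                             + k4 / (24.0 * k2**2) * he3)
print(f"non-overload probability ~ {p_no_overload:.3f}")
```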

Relevance:

30.00%

Publisher:

Abstract:

A suboptimal resource allocation algorithm for an Orthogonal Frequency Division Multiplexing (OFDM) based cooperative scheme is proposed. The system consists of multiple relays. The subcarrier space is divided into blocks, and relays participating in cooperation are allocated specific blocks to be used with a user. To ensure unique subcarrier assignment, the system is constrained such that the same block cannot be used by more than one user. Users are given fair block assignments, while no restriction is placed on the maximum number of blocks a relay can employ. Forced-cost-based decisions [1] are used for block allocation. Simulation results show that this scheme outperforms a non-cooperative scheme with sequential allocation in terms of power usage.
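
The forced-cost decision rule of [1] is not reproduced here; the sketch below shows only the assignment structure described (non-overlapping blocks, equal per-user quotas, no per-relay limit) with a generic stand-in cost such as transmit power.

```python
import numpy as np

# Greedy sketch of fair, non-overlapping block assignment. cost[u, r, b] is a
# stand-in for the power cost of serving user u via relay r on block b.
rng = np.random.default_rng(1)
n_users, n_relays, n_blocks = 3, 4, 12
cost = rng.uniform(1.0, 5.0, size=(n_users, n_relays, n_blocks))

blocks_per_user = n_blocks // n_users        # fairness: equal quota per user
assignment = {}                              # block -> (user, relay)
free = set(range(n_blocks))
for u in range(n_users):
    for _ in range(blocks_per_user):
        # cheapest remaining (relay, block) pair for this user
        r, b = min(((r, b) for r in range(n_relays) for b in free),
                   key=lambda rb: cost[u, rb[0], rb[1]])
        assignment[b] = (u, r)
        free.remove(b)                       # block now unusable by other users

print(assignment)
```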

Relevance:

30.00%

Publisher:

Abstract:

• Road crashes as a cause of disability
• Disability in the study of road safety
• Thai spinal injury study
  – Contextual information: beliefs and community
  – Transport system and hidden safety costs
  – Cambodia experience
  – Pakistan fatalism study
• Feedback to policies and programs

Relevance:

30.00%

Publisher:

Abstract:

Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors grouped by their regulatory role and the corresponding promoter strength. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters where the corresponding TFBSs are either all repressors or all activators. Although the observations were specific to σ70, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance, in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships, and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
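
The spectrum kernel at the heart of the SVM experiments is simple to state: two sequences are compared through the inner product of their k-mer count vectors. A minimal sketch with toy sequences (not the thesis data or code):

```python
from collections import Counter

def spectrum(seq, k=3):
    """k-mer count vector of a sequence (its 'k-spectrum')."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(a, b, k=3):
    """Inner product of two k-spectra: weight of shared k-mers."""
    sa, sb = spectrum(a, k), spectrum(b, k)
    return sum(count * sb[kmer] for kmer, count in sa.items())

# Toy binding-site-like sequences: similar sites share many k-mers.
print(spectrum_kernel("TGTGATCTAGATCACA", "TGTGATTTAGATCACA"))  # high
print(spectrum_kernel("TGTGATCTAGATCACA", "GGGCCCGGGCCCGGGC"))  # low
```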

Relevance:

30.00%

Publisher:

Abstract:

A frame-rate stereo vision system, based on non-parametric matching metrics, is described. Traditional metrics, such as normalized cross-correlation, are expensive in terms of logic. Non-parametric measures require only simple, parallelizable, functions such as comparators, counters and exclusive-or, and are thus very well suited to implementation in reprogrammable logic.
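
One widely used non-parametric pair is the census transform with Hamming-distance matching; the sketch below illustrates why such measures suit reprogrammable logic (comparators, shifts, XOR and popcounts only). It illustrates the class of metrics, not necessarily the exact transform used in this system.

```python
import numpy as np

def census(img, r=1):
    """Census transform: each pixel becomes a bit-string recording which
    neighbours in its (2r+1)^2 window are darker -- comparators only.
    (np.roll wraps at the borders; acceptable for a sketch.)"""
    out = np.zeros(img.shape, dtype=np.uint32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << 1) | (shifted < img).astype(np.uint32)
    return out

def hamming(a, b):
    """Matching cost between census codes: XOR then popcount."""
    x = a ^ b
    count = np.zeros_like(x)
    for _ in range(32):
        count += x & 1
        x = x >> 1
    return count

left = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
right = np.roll(left, 3, axis=1)              # toy 3-pixel disparity
costs = hamming(census(left), census(right))  # low where windows match
```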

Relevance:

30.00%

Publisher:

Abstract:

Evaluating the validity of formative variables has presented ongoing challenges for researchers. In this paper we use global criterion measures to compare and critically evaluate two alternative formative measures of System Quality. One model is based on the ISO-9126 software quality standard, and the other is based on a leading information systems research model. We find that despite both models having a strong provenance, many of the items appear to be non-significant in our study. We examine the implications of this by evaluating the quality of the criterion variables we used, and the performance of PLS when evaluating formative models with a large number of items. We find that our respondents had difficulty distinguishing between global criterion variables measuring different aspects of overall System Quality. Also, because formative indicators “compete with one another” in PLS, it may be difficult to develop a set of measures which are all significant for a complex formative construct with a broad scope and a large number of items. Overall, we suggest that there is cautious evidence that both sets of measures are valid and largely equivalent, although questions still remain about the measures, the use of criterion variables, and the use of PLS for this type of model evaluation.

Relevance:

30.00%

Publisher:

Abstract:

Here, mixed convection boundary-layer flow of a viscous fluid along a heated vertical semi-infinite plate is investigated in a non-absorbing medium. The relationship between convection and thermal radiation is established via a boundary condition of the second kind on the thermally radiating vertical surface. The governing boundary-layer equations are transformed into dimensionless parabolic partial differential equations with the help of appropriate transformations, and the resultant system is solved numerically by applying a straightforward finite difference method along with the Gaussian elimination technique. It is worth noting that the Prandtl number, Pr, is taken to be small (Pr << 1), which is appropriate for liquid metals. Moreover, the numerical results are demonstrated graphically by showing the effects of important physical parameters, namely the modified Richardson number (or mixed convection parameter), Ri*, and the surface radiation parameter, R, in terms of the local skin friction and local Nusselt number coefficients.
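
As an illustration of the numerical machinery named above (an implicit finite-difference step reduced to Gaussian elimination on a tridiagonal system), the sketch below solves one implicit step of a model diffusion equation; it is a toy analogue, not the paper's transformed boundary-layer system.

```python
import numpy as np

def thomas(a, b, c, d):
    """Gaussian elimination for a tridiagonal system (Thomas algorithm);
    a = sub-diagonal, b = diagonal, c = super-diagonal, d = right-hand side."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Model problem: one implicit Euler step of u_t = u_xx on (0,1), u(0)=u(1)=0.
n, dt = 50, 1e-3
dx = 1.0 / (n + 1)
r = dt / dx**2
u = np.sin(np.pi * np.linspace(dx, 1.0 - dx, n))    # initial profile
u_next = thomas(np.full(n, -r), np.full(n, 1.0 + 2.0 * r), np.full(n, -r), u)
```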

Relevance:

30.00%

Publisher:

Abstract:

Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements which may not be handled well by average Internet users. The management of secure passwords, for example, creates an extra overhead which is often neglected for usability reasons. Furthermore, password-based approaches are applicable only at initial login and do not protect against unlocked-workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavioral biometrics, where keystroke dynamics based on free text is used continuously to verify the identity of a user in real time. We improve existing keystroke-dynamics-based verification schemes in four aspects. First, we improve scalability by using a constant number of users, instead of the whole user space, to verify the identity of the target user. Second, we provide an adaptive user model which enables our solution to take changes in user behavior into consideration in the verification decision. Third, we identify a new distance measure which enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set which we collected from users while they were interacting with their mailboxes during their daily activities.
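
The paper's new distance measure is not reproduced here; as an illustration of the general approach in free-text keystroke dynamics, the sketch below compares two typing samples through the mean latencies of the digraphs (two-key sequences) they share. All key/timestamp data are invented.

```python
from statistics import mean

def digraph_latencies(keys, times):
    """Map each digraph to its mean inter-key latency (seconds)."""
    lat = {}
    for (k1, t1), (k2, t2) in zip(zip(keys, times), zip(keys[1:], times[1:])):
        lat.setdefault(k1 + k2, []).append(t2 - t1)
    return {dg: mean(v) for dg, v in lat.items()}

def distance(sample_a, sample_b):
    """Mean absolute latency difference over the digraphs both samples share."""
    a, b = digraph_latencies(*sample_a), digraph_latencies(*sample_b)
    shared = a.keys() & b.keys()
    if not shared:
        return float("inf")    # nothing to compare; treat as maximally distant
    return mean(abs(a[d] - b[d]) for d in shared)

reference = ("thethequick", [0.00, 0.11, 0.22, 0.35, 0.46, 0.58,
                             0.71, 0.84, 0.95, 1.08, 1.20])
probe = ("thenthere", [0.00, 0.10, 0.21, 0.36, 0.48, 0.59, 0.70, 0.82, 0.93])
print(f"distance: {distance(reference, probe):.3f} s")
```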

Relevance:

30.00%

Publisher:

Abstract:

There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories that rely on something someone i) knows (e.g. a password), and/or ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the requirement for expensive devices, the risk of stolen bio-templates, etc. Moreover, existing approaches usually perform authentication only once, at the initial login. Non-intrusive and continuous monitoring of user activities emerges as a promising solution for hardening the authentication process with a further category: how someone behaves. In recent years, various keystroke-dynamics behavior-based approaches have been published that are able to authenticate humans based on their typing behavior. The majority focus on so-called static text approaches, where users are requested to type a previously defined text. Relatively few techniques are based on free text approaches that allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions. Unsolved problems include, for instance, scalability, high response times and high error rates. The aim of this work is the development of behavior-based verification solutions. Our main requirement is to deploy these solutions under realistic conditions within existing environments, in order to enable transparent, free-text-based continuous verification of active users with low error rates and response times.