93 results for 670200 Fibre Processing and Textiles


Relevance: 100.00%

Abstract:

This work is concerned with the genetic basis of normal human pigmentation variation. Specifically, the role of polymorphisms within the solute carrier family 45 member 2 (SLC45A2, also known as membrane associated transporter protein, MATP) gene was investigated with respect to variation in hair, skin and eye colour, both between and within populations. SLC45A2 is an important regulator of melanin production, and mutations in the gene underlie the most recently identified form of oculocutaneous albinism. There is evidence to suggest that non-synonymous polymorphisms in SLC45A2 are associated with normal pigmentation variation between populations. The underlying hypothesis of this thesis is therefore that polymorphisms in SLC45A2 alter the function or regulation of the protein, thereby altering the important role it plays in melanogenesis and providing a mechanism for normal pigmentation variation. To investigate the role that SLC45A2 polymorphisms play in human pigmentation variation, a DNA database was established that collected pigmentation phenotype information and blood samples from more than 700 individuals. This database was used as the foundation for the two association studies outlined in this thesis, the first of which involved genotyping two previously described non-synonymous polymorphisms, p.Glu272Lys and p.Phe374Leu, in four different population groups. For both polymorphisms, allele frequencies were significantly different between population groups, and the 272Lys and 374Leu alleles were strongly associated with black hair, brown eyes and olive skin colour in Caucasians. This was the first report to show that SLC45A2 polymorphisms are associated with normal human intra-population pigmentation variation. The second association study involved genotyping several SLC45A2 promoter polymorphisms to determine whether they also play a role in pigmentation variation. First, the transcription start site (TSS), and hence the putative proximal promoter region, was identified using 5' RNA ligase mediated rapid amplification of cDNA ends (RLM-RACE). Two alternate TSSs were identified and the putative promoter region was screened for novel polymorphisms using denaturing high performance liquid chromatography (dHPLC). A novel duplication (c.–1176_–1174dupAAT) was identified along with other previously described single nucleotide polymorphisms (c.–1721C>G and c.–1169G>A). Strong linkage disequilibrium meant that all three polymorphisms were associated with skin colour, such that the –1721G, +dup and –1169A alleles were associated with olive skin in Caucasians. No linkage disequilibrium was observed between the promoter and coding region polymorphisms, suggesting independent effects. The association analyses were complemented with functional data showing that the –1721G, +dup and –1169A alleles significantly decreased SLC45A2 transcriptional activity. Based on in silico analysis showing that these alleles remove a microphthalmia-associated transcription factor (MITF) binding site, and given that MITF is a known regulator of SLC45A2 (Baxter and Pavan, 2002; Du and Fisher, 2002), it was postulated that SLC45A2 promoter polymorphisms could contribute to the regulation of pigmentation by altering MITF binding affinity. Further characterisation of the SLC45A2 promoter was carried out using luciferase reporter assays to determine the transcriptional activity of different regions of the promoter. Five constructs of increasing length were designed and their promoter activity evaluated.
Constitutive promoter activity was observed within the first ~200 bp, and promoter activity increased as the construct size increased. The functional impact of the –1721G, +dup and –1169A alleles, which removed a MITF consensus binding site, was assessed using electrophoretic mobility shift assays (EMSA) and expression analysis of genotyped melanoblast and melanocyte cell lines. EMSA results confirmed that the promoter polymorphisms affected DNA-protein binding. Interestingly, however, the protein(s) involved were not MITF, or at least MITF was not the protein directly binding to the DNA. In an effort to more thoroughly characterise the functional consequences of SLC45A2 promoter polymorphisms, the mRNA expression levels of SLC45A2 and MITF were determined in melanocyte/melanoblast cell lines. Based on SLC45A2's role in processing and trafficking TYRP1 from the trans-Golgi network to stage 2 melanosomes, the mRNA expression of TYRP1 was also investigated. Expression results suggested a coordinated expression of pigmentation genes. This thesis has substantially contributed to the field of pigmentation by showing that SLC45A2 polymorphisms not only show allele frequency differences between population groups, but also contribute to normal pigmentation variation within a Caucasian population. In addition, promoter polymorphisms have been shown to have functional consequences for SLC45A2 transcription and the expression of other pigmentation genes. Combined, the data presented in this work support the notion that SLC45A2 is an important contributor to normal pigmentation variation and should be the target of further research to elucidate its role in determining pigmentation phenotypes. Understanding SLC45A2's function may lead to the development of therapeutic interventions for oculocutaneous albinism and other disorders of pigmentation. It may also further our understanding of skin cancer susceptibility and evolutionary adaptation to different UV environments, and contribute to the forensic application of pigmentation phenotype prediction.
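
As a rough illustration of the allele-association testing described above, the sketch below runs a chi-square test on a hypothetical 2x2 table of allele counts by phenotype; the counts, and the use of scipy, are illustrative assumptions, not the thesis's data or tooling.

```python
# Toy allele/phenotype association test on hypothetical counts
# (not the thesis's data).
from scipy.stats import chi2_contingency

#                  dark hair   fair hair
table = [[180, 40],   # 272Lys allele count (hypothetical)
         [60, 120]]   # 272Glu allele count (hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2e}")  # a small p suggests association
```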

Relevance: 100.00%

Abstract:

This special issue aims to provide up-to-date knowledge and the latest scientific concepts and technological developments in the processing, characterization, testing, mechanics, modeling and applications of a broad range of advanced materials. The many contributors, from Denmark, Germany, UK, Iran, Saudi Arabia, Malaysia, Japan, the People’s Republic of China, Singapore, Taiwan, USA, New Zealand and Australia, present a wide range of topics including: nanomaterials, thin films and coatings, metals and alloys, composite materials, materials processing and characterization, biomaterials and biomechanics, and computational materials science and simulation. The work will therefore be of great interest to a broad spectrum of researchers and technologists.

Relevance: 100.00%

Abstract:

Surveillance networks are typically monitored by a few people viewing several monitors displaying the camera feeds. It is therefore very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies: to recognise an event, people and objects must be tracked, and tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or because they are unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, making a separate detection task unnecessary except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, while the tracking system benefits from the improved performance in uncertain conditions, arising from occlusion and noise, that a particle filter provides. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and a visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results, demonstrating a significant improvement in performance when compared to a system using either modality individually.
Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore to improving security in areas under surveillance.
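
The sketch below is a minimal bootstrap particle filter of the general kind discussed above, tracking a single 1-D position from noisy measurements. It is illustrative only, not the SCF itself, which adds mode creation/deletion and identity management on top of such a filter.

```python
# Minimal bootstrap particle filter: predict, weight by likelihood,
# resample when the effective sample size collapses.
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal(0.0, 5.0, N)   # initial (manual) detection
weights = np.full(N, 1.0 / N)

true_pos = 0.0
for step in range(50):
    true_pos += 1.0                                  # object moves
    z = true_pos + rng.normal(0.0, 2.0)              # noisy observation

    particles += 1.0 + rng.normal(0.0, 1.0, N)       # predict (motion model)
    weights *= np.exp(-0.5 * ((z - particles) / 2.0) ** 2)  # likelihood
    weights /= weights.sum()

    # Resample when the effective sample size drops below N/2.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

estimate = np.sum(particles * weights)
print(f"true {true_pos:.1f}, estimate {estimate:.1f}")
```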

Relevance: 100.00%

Abstract:

In this paper, we define and present a comprehensive classification of user intent for Web searching. The classification consists of three hierarchical levels of informational, navigational, and transactional intent. After deriving attributes of each intent class, we developed a software application that automatically classified queries using a Web search engine log of over a million and a half queries submitted by several hundred thousand users. Our findings show that more than 80% of Web queries are informational in nature, with about 10% each being navigational and transactional. To validate the accuracy of our algorithm, we manually coded 400 queries and compared the results from this manual classification with the results determined by the automated method. This comparison showed that the automatic classification has an accuracy of 74%. For the remaining queries, the user intent is vague or multi-faceted, pointing to the need for probabilistic classification. We discuss how search engines can use knowledge of user intent to provide more targeted and relevant results in Web searching.
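
To make the attribute-based approach concrete, here is a deliberately simplified rule-based classifier over the three intent levels; the rules and trigger terms are hypothetical stand-ins, not the attributes actually derived in the paper.

```python
# Illustrative (hypothetical) rule set in the spirit of the paper's
# attribute-based query classifier.
def classify_query(query: str) -> str:
    q = query.lower()
    # Navigational: the user wants to reach a specific site.
    if q.startswith("www.") or any(t in q for t in (".com", "homepage")):
        return "navigational"
    # Transactional: the user wants to obtain or do something.
    if any(t in q for t in ("download", "buy", "lyrics", "images")):
        return "transactional"
    # Default: informational (consistent with ~80% of queries).
    return "informational"

for q in ("www.nasa.gov", "download firefox", "causes of aurora borealis"):
    print(q, "->", classify_query(q))
```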

Relevance: 100.00%

Abstract:

Introduction The purpose of this study was to develop, implement and evaluate the impact of an educational intervention, comprising an innovative model of clinical decision-making and an educational delivery strategy, for facilitating nursing students' learning and development of competence in paediatric physical assessment practices.

Background of the study Nursing students have an undergraduate education that aims to produce graduates of a generalist nature who demonstrate entry-level competence for providing nursing care in a variety of health settings. Consistent with population morbidity and health care roles, paediatric nursing concepts typically form a comparatively small part of undergraduate curricula, and students' exposure to paediatric physical assessment concepts and principles is brief. However, the nursing shortage has changed traditional nursing employment patterns, and new graduates form the majority of the recruitment pool for paediatric nursing speciality staff. Paediatric nursing is a popular career choice for graduates, and anecdotal evidence suggests that nursing students who select a clinical placement in their final year intend to seek employment in paediatrics upon graduation. Although concepts of paediatric nursing are included within undergraduate curricula, students' ability to develop the required habits of mind to practice in what is still regarded as a speciality area of practice is somewhat limited. One of the areas of practice where this particularly has an impact is paediatric nursing physical assessment. Physical assessment is a fundamental component of nursing practice, and competence in this area is central to nursing students' development of clinical capability for practice as a registered nurse. Timely recognition of physiological deterioration of patients is a key outcome of nurses' competent use of physical assessment strategies, regardless of the practice context. In paediatric nursing contexts, children's physical assessment practices must specifically accommodate the child's different physiological composition, function and pattern of clinical deterioration (Hockenberry & Barrera, 2007). Thus, to effectively manage physical assessment of patients within the paediatric practice setting, nursing students need to integrate paediatric nursing theory into their practice. This requires significant information processing, and it is in this process that students are frequently challenged. The provision of rules or models can guide practice and assist novice-level nurses to develop their capabilities (Benner, 1984; Benner, Hooper-Kyriakidis & Stannard, 1999). Nursing practice models are cognitive tools that represent simplified patterns of expert analysis, employing concepts that suit the limited reasoning of the inexperienced, and can represent the 'rules' referred to by Benner (1984). Without a practice model of physical assessment, students are likely to be uncertain about how to proceed with data collection, the interpretation of paediatric clinical findings and the appraisal of findings. These circumstances can result in ad hoc and unreliable nursing physical assessment that forms a poor basis for nursing decisions. The educational intervention developed as part of this study sought to resolve this problem and support nursing students' development of competence in paediatric physical assessment.

Methods This study utilised the Context Input Process Product (CIPP) model by Stufflebeam (2004) as the theoretical framework underpinning the research design and evaluation methodology. Each of the four elements in the CIPP model was utilised to guide a discrete stage of this study. The Context element informed the design of the clinical decision-making process, the Paediatric Nursing Physical Assessment (PNPA) model. The Input element was utilised in appraising relevant literature, identifying an appropriate instructional methodology to facilitate learning and delivery of the educational intervention to undergraduate nursing students, and developing program content (the CD-ROM kit). Study One employed the Process element and used expert panel approaches to review and refine instructional methods, identifying potential barriers to obtaining an effective evaluation outcome. The Product element guided the design and implementation of Study Two, which was conducted in two phases. Phase One employed a quasi-experimental between-subjects methodology to evaluate the impact of the educational intervention on nursing students' clinical performance and self-appraisal of practices in paediatric physical assessment. Phase Two employed a thematic analysis and explored the experiences and perspectives of a sample subgroup of nursing students who used the PNPA CD-ROM kit as preparation for paediatric clinical placement.

Results Results from the Process review in Study One indicated that the prototype CD-ROM kit containing the PNPA model met the predetermined benchmarks for face validity, and that the impact evaluation instrumentation had adequate content validity in comparison with predetermined benchmarks. In the first phase of Study Two, the educational intervention did not result in statistically significant differences in measures of student performance or self-appraisal of practice. However, in Phase Two, qualitative commentary from students, and from the expert panel who reviewed the prototype CD-ROM kit (Study One, Phase One), strongly endorsed the quality of the intervention and its potential for supporting learning. This raises questions regarding transfer of learning, and it is likely that, within this study, several factors influenced students' transfer of learning from the educational intervention to the clinical practice environment, where outcomes were measured.

Conclusion In summary, the educational intervention employed in this study provides insights into the potential e-learning approaches offer for delivering authentic learning experiences to undergraduate nursing students. Findings in this study raise important questions regarding possible pedagogical influences on learning outcomes, issues within the transfer of theory to practice, and factors that may have influenced findings within the context of this study. This study makes a unique contribution to nursing education, specifically with respect to progressing an understanding of the challenges faced in employing instructive methods to impact upon nursing students' development of competence. The important contribution that transfer-of-learning processes make to students' transition into the professional practice context, and to their development of competence within the context of speciality practice, is also highlighted. This study contributes to a greater awareness of the complexity of translating theoretical learning at undergraduate level into clinical practice, particularly within speciality contexts.

Relevance: 100.00%

Abstract:

Acquiring accurate silhouettes has many applications in computer vision. This is usually done through motion detection, or through simple background subtraction under highly controlled environments (e.g. chroma-key backgrounds). Lighting and contrast issues in typical outdoor or office environments make accurate segmentation very difficult in these scenes. In this paper, gradients are used in conjunction with intensity and colour to provide a robust segmentation of motion, after which graph cuts are utilised to refine the segmentation. The results presented using the ETISEO database demonstrate that an improved segmentation is achieved through the combined use of motion detection and graph cuts, particularly in complex scenes.
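
A minimal sketch of the general motion-plus-graph-cuts idea, assuming OpenCV: a background-subtraction motion mask seeds a graph-cut refinement. It uses cv2.grabCut (a graph-cut-based routine) rather than the paper's own formulation, and the input file name is hypothetical.

```python
# Motion mask seeds a graph-cut refinement of the silhouette.
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.avi")      # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

ok, frame = cap.read()
while ok:
    fg = subtractor.apply(frame)                # 255 = motion, 127 = shadow

    # Seed graph cuts from the motion mask: moving pixels become probable
    # foreground, everything else (including shadows) probable background.
    mask = np.full(fg.shape, cv2.GC_PR_BGD, np.uint8)
    mask[fg == 255] = cv2.GC_PR_FGD

    if (mask == cv2.GC_PR_FGD).any():           # grabCut needs both labels
        bgd = np.zeros((1, 65), np.float64)
        fgd = np.zeros((1, 65), np.float64)
        cv2.grabCut(frame, mask, None, bgd, fgd, 3, cv2.GC_INIT_WITH_MASK)

    silhouette = np.where(
        (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0
    ).astype(np.uint8)
    ok, frame = cap.read()
```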

Relevance: 100.00%

Abstract:

The performance of iris recognition systems is significantly affected by segmentation accuracy, especially for non-ideal iris images. This paper proposes an improved method to localise non-circular irises quickly and accurately. Shrinking and expanding active contour methods are consolidated when localising the inner and outer iris boundaries. First, the pupil region is roughly estimated based on histogram thresholding and morphological operations. Thereafter, a shrinking active contour model is used to precisely locate the inner iris boundary. Finally, the estimated inner iris boundary is used as an initial contour for an expanding active contour scheme to find the outer iris boundary. The proposed scheme is robust in finding the exact iris boundaries of non-circular and off-angle irises. In addition, occlusions of the iris images from eyelids and eyelashes are automatically excluded from the detected iris region. Experimental results on the CASIA v3.0 iris database indicate the accuracy of the proposed technique.
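
The rough pupil-estimation step described above might look like the following OpenCV sketch (histogram thresholding plus morphological clean-up); the threshold and kernel size are illustrative guesses, not the paper's values, and the resulting contour would then initialise the shrinking active contour.

```python
# Rough pupil localisation: dark-region threshold + morphological clean-up.
import cv2

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# The pupil is the darkest large blob: threshold low intensities, then
# remove speckle and fill holes with opening/closing.
_, binary = cv2.threshold(eye, 50, 255, cv2.THRESH_BINARY_INV)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Largest connected component serves as the pupil estimate.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
if contours:
    pupil = max(contours, key=cv2.contourArea)
    (cx, cy), radius = cv2.minEnclosingCircle(pupil)
    print(f"pupil centre ~({cx:.0f}, {cy:.0f}), radius ~{radius:.0f}px")
```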

Relevance: 100.00%

Abstract:

Is there a role for prototyping (sketching, pattern making and sampling) in addressing real-world problems of sustainability (People, Profit and Planet), in this case social/healthcare issues, through fashion and textiles research? Skin cancer and related illnesses are a major cause of disfigurement and death in New Zealand and Australia, where the rates of melanoma, a serious form of skin cancer, are four times higher than in the Northern Hemisphere regions of the USA, UK and Canada (IARC, 1992). In 2007, the AUT University (Auckland University of Technology) Fashion Department and the Health Promotion Department of the Cancer Society - Auckland Division (CSA) developed a prototype hat aimed at exploring a barrier-type solution to preventing facial and neck skin damage. This is a paradigm shift from the usual medical research model. This paper provides an overview of the project and examines how a fashion prototype has been used to communicate emergent social, environmental, personal, physiological and technological concerns to the trans-disciplinary research team. The authors consider how the design of a product can enhance and support sustainable design practice while contributing a potential solution to an ongoing health issue. Analysis of this case study provides an insight into prototyping in fashion and textiles design, user engagement and the importance of requirements analysis in relation to sustainable development. The analysis and the successful outcome of the final prototype have provided a gateway to future collaborative research and product development.

Relevance: 100.00%

Abstract:

Emerging data streaming applications in Wireless Sensor Networks require reliable and energy-efficient transport protocols. Our recent Wireless Sensor Network deployment in the Burdekin delta, Australia, for water monitoring [T. Le Dinh, W. Hu, P. Sikka, P. Corke, L. Overs, S. Brosnan, Design and deployment of a remote robust sensor network: experiences from an outdoor water quality monitoring network, in: Second IEEE Workshop on Practical Issues in Building Sensor Network Applications (SenseApp 2007), Dublin, Ireland, 2007] is one such example. This application involves streaming sensed data such as pressure, water flow rate and salinity periodically from many scattered sensors to the sink node, which in turn relays them via an IP network to a remote site for archiving, processing and presentation. While latency is not a primary concern in this class of application (the sampling rate is usually in terms of minutes or hours), energy efficiency is. Continuous long-term operation and reliable delivery of the sensed data to the sink are also desirable. This paper proposes ERTP, an Energy-efficient and Reliable Transport Protocol for Wireless Sensor Networks. ERTP is designed for data streaming applications in which sensor readings are transmitted from one or more sensor sources to a base station (or sink). ERTP uses a statistical reliability metric which ensures that the number of data packets delivered to the sink exceeds the defined threshold. Our extensive discrete event simulations and experimental evaluations show that ERTP is significantly more energy-efficient than current approaches, reducing energy consumption by more than 45%. Consequently, sensor nodes are more energy-efficient and the lifespan of the unattended WSN is increased.
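
As a toy illustration of a hop-by-hop statistical reliability computation of the kind ERTP's metric suggests (the exact formulation is in the paper; the formula below is an assumption for illustration): pick the smallest number of transmissions n per hop such that the delivery probability 1 - p^n meets a target.

```python
# Smallest per-hop transmission count n with 1 - p**n >= target.
import math

def transmissions_needed(loss_rate: float, target: float) -> int:
    """Per-hop transmissions so the delivery probability meets `target`."""
    return math.ceil(math.log(1.0 - target) / math.log(loss_rate))

# Example: 30% per-hop loss, 95% delivery target -> 3 transmissions,
# since 1 - 0.3**3 = 0.973 >= 0.95.
print(transmissions_needed(0.30, 0.95))
```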

Relevance: 100.00%

Abstract:

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to the derived physiological time-series variable sets alone, and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) having MR 9.8 and raw time-series summary data (dataset A) having MR 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by the addition of risk factor variables to time-series variable based models is significant. The addition of time-series derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to the use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables were used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological variable values being outside the accepted normal range, is associated with some improvement in model performance.
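
Below is a minimal sketch of the evaluation approach described above (under-sampled majority class, a decision tree, and Kappa/AUC/MR reporting), using scikit-learn on synthetic data; the thesis itself used different tooling (Weka's J48 and Cfs), so this is illustrative only.

```python
# Imbalanced-class evaluation: under-sample the majority class, fit a
# decision tree, report Kappa, AUC and misclassification rate (MR).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score, roc_auc_score

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Under-sample the majority class to balance the training data.
rng = np.random.default_rng(0)
maj, mino = np.where(y == 0)[0], np.where(y == 1)[0]
keep = np.concatenate([rng.choice(maj, size=len(mino), replace=False), mino])
Xb, yb = X[keep], y[keep]

Xtr, Xte, ytr, yte = train_test_split(Xb, yb, stratify=yb, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr)

pred = clf.predict(Xte)
print("Kappa:", cohen_kappa_score(yte, pred))
print("AUC:  ", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
print("MR:   ", (pred != yte).mean())
```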

Relevance: 100.00%

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity into the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems; hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only, or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF estimation algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
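
A simple non-parametric IF estimator of the kind Part-II builds on can be sketched by tracking the peak (ridge) of a time-frequency representation. The sketch below uses a plain spectrogram rather than the thesis's adaptive T-class distributions, so it illustrates the principle only.

```python
# IF estimation by ridge-following on a time-frequency representation,
# tested on a linear chirp whose true IF is known.
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = chirp(t, f0=50, t1=2.0, f1=200)       # linear FM test signal

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=120)
if_est = f[np.argmax(Sxx, axis=0)]        # ridge = IF estimate per frame

true_if = 50 + (200 - 50) * tt / 2.0      # known IF of the linear chirp
print("mean abs error (Hz):", np.abs(if_est - true_if).mean())
```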

Relevance: 100.00%

Abstract:

This research used the Queensland Police Service, Australia, as a major case study. Information on the principles, techniques and processes used, and the reasons for the recording, storing and release of audit information for evidentiary purposes, is reported. It is shown that law enforcement agencies have a two-fold interest in, and legal obligation pertaining to, audit trails: the first relates to the situation where audit trails are actually used by criminals in the commission of crime, and the second to where audit trails are generated by the information systems used by the police themselves in support of the recording and investigation of crime. Eleven court cases involving Queensland Police Service audit trails used in evidence in Queensland courts were selected for further analysis. It is shown that, of the cases studied, none of the evidence presented was rejected or seriously challenged from a technical perspective. These results were further analysed and related to normal requirements for trusted maintenance of audit trail information in sensitive environments, with discussion of the ability and/or willingness of courts to fully challenge, assess or value audit evidence presented. Managerial and technical frameworks are proposed for, firstly, what may be considered an environment in which a computer system is operating “properly” and, secondly, what aspects of education, training, qualifications, expertise and the like may be considered appropriate for the persons responsible within that environment. Analysis was undertaken to determine whether audit and control of information in a high-security environment, such as law enforcement, could be judged as having improved, or not, in the transition from manual to electronic processes. Information collection, control of processing and audit in the manual processes used by the Queensland Police Service in the period 1940 to 1980 were assessed against the current electronic systems, essentially introduced to policing in the 1980s and 1990s. Results show that electronic systems do provide faster communications, with centrally controlled and updated information readily available for use by large numbers of users connected across significant geographical locations. However, it is clearly evident that the price paid for this is a lack of ability and/or reluctance to provide improved audit and control processes. To compare the information systems audit and control arrangements of the Queensland Police Service with those of other government departments and agencies, an Australia-wide survey was conducted. Results were contrasted with those of a survey conducted by the Australian Commonwealth Privacy Commission four years previously, which showed that security in relation to the recording of activity against access to information held on Australian government computer systems had been poor and a cause for concern. Within this four-year period, however, there is evidence to suggest that government organisations are increasingly more inclined to generate audit trails. An attack on the overall security of audit trails in computer operating systems was initiated to further investigate findings reported in relation to the government systems survey, which showed that information systems audit trails in Microsoft Corporation's “Windows” operating system environments are relied on quite heavily.
An audit of the security of audit trails generated, stored and managed in the Microsoft “Windows 2000” operating system environment was undertaken, and compared and contrasted with similar audit trail schemes in the “UNIX” and “Linux” operating systems. Strength of passwords and exploitation of any security problems in access control were targeted using software tools freely available in the public domain. Results showed that such security for the “Windows 2000” system is seriously flawed, and the integrity of audit trails stored within these environments cannot be relied upon. A framework and set of guidelines for use by expert witnesses in the information technology (IT) profession are proposed. This is achieved by examining the current rules and guidelines related to the provision of expert evidence in a court environment, by analysing the rationale for the separation of distinct disciplines and corresponding bodies of knowledge used by the medical profession and forensic science, and then by analysing the bodies of knowledge within the discipline of IT itself. It is demonstrated that the accepted processes and procedures relevant to expert witnessing in a court environment are transferable to the IT sector. However, unlike some discipline areas, this analysis has clearly identified two distinct aspects of the matter which appear particularly relevant to IT. These two areas are: expertise gained through the application of IT to information needs in a particular public or private enterprise; and expertise gained through accepted and verifiable education, training and experience in fundamental IT products and systems.
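
The following sketch is not from the thesis; it is a generic illustration of one standard way to make an audit trail tamper-evident, chaining each record to the previous one with an HMAC so that after-the-fact edits break verification. The key handling and record format here are hypothetical.

```python
# Tamper-evident audit log: each entry's MAC covers the record plus the
# previous entry's MAC, so editing history invalidates the chain.
import hmac, hashlib, json

KEY = b"audit-signing-key"   # hypothetical key, ideally kept off the host

def append(log: list, record: dict) -> None:
    prev = log[-1]["mac"] if log else ""
    payload = json.dumps(record, sort_keys=True) + prev
    mac = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"record": record, "mac": mac})

def verify(log: list) -> bool:
    prev = ""
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True) + prev
        good = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(good, entry["mac"]):
            return False
        prev = entry["mac"]
    return True

log = []
append(log, {"user": "op1", "action": "read", "file": "case42"})
append(log, {"user": "op1", "action": "write", "file": "case42"})
assert verify(log)
log[0]["record"]["action"] = "none"   # tamper with history...
assert not verify(log)                # ...and verification fails
```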