913 results for Stochastic Subspace System Identification


Relevance: 30.00%

Abstract:

This chapter presents Radio Frequency Identification (RFID), one of the Automatic Identification and Data Capture (AIDC) technologies (Wamba and Boeck, 2008), and discusses its application in E-Commerce. First, RFID is defined and the tag and reader components of an RFID system are explained. The historical context of RFID is then briefly discussed. Next, RFID is contrasted with other AIDC technologies, especially barcodes, which are commonly applied in E-Commerce. Lastly, RFID applications in E-Commerce are discussed, with a focus on achievable benefits, obstacles to successful application, and ways to alleviate them.

Relevance: 30.00%

Abstract:

Enterprise-level risk management is a strategically important element of responsible corporate governance and one of the most challenging areas facing company management today. Effective corporate risk management cannot be achieved merely by following the general risk management principles formulated in the international and domestic literature; the design of the risk management system must also take into account the characteristics of both the industry and the individual company. This is especially important for a company with as specialised an activity as an electricity transmission system operator (TSO). In this article, based on research conducted in cooperation with the Hungarian electricity TSO, the authors present a complex theoretical and practical framework from which a new risk management methodology, uniform across business areas, was developed for the TSO (focusing on the methodological steps of risk identification and quantification) that is suitable for determining enterprise-level risk exposure. _______ This study addresses one of today's most challenging areas of enterprise management: the development and introduction of an integrated and efficient risk management system. For companies operating in specific network industries with a dominant market share and a key role in the national economy, such as electricity TSOs, risk management is of particular importance. The study introduces an innovative, mathematically and statistically grounded as well as economically reasoned management approach for the identification, individual effect calculation and summation of risk factors. Every building block is customized for the organizational structure and operating environment of the TSO.
While the identification phase guarantees all-inclusivity, the calculation phase incorporates expert techniques and Monte Carlo simulation, and the summation phase presents the expected combined distribution and value effect of the risks on the company's profit lines, based on previously undiscovered correlations between individual risk factors.
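
The summation phase described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' methodology: the factor means, spreads, and correlation matrix are made-up placeholders, and a multivariate normal stands in for whatever impact distributions the expert quantification step would actually produce.

```python
import numpy as np

# Hypothetical illustration of the summation phase: three risk factors with
# individually estimated impact distributions (units arbitrary) and an
# assumed correlation structure, combined by Monte Carlo simulation.
rng = np.random.default_rng(42)

means = np.array([2.0, 5.0, 1.5])    # expected impact of each risk factor
stdevs = np.array([0.8, 2.0, 0.5])   # spread of each impact
corr = np.array([[1.0, 0.3, 0.1],    # assumed correlations between factors
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])

# Build the covariance matrix and draw correlated scenario samples.
cov = np.outer(stdevs, stdevs) * corr
samples = rng.multivariate_normal(means, cov, size=100_000)

# The combined effect on the profit line is the per-scenario sum over factors.
total = samples.sum(axis=1)
expected = total.mean()              # expected combined effect
var_95 = np.percentile(total, 95)    # 95th-percentile exposure
```

In practice each marginal would come from the expert-based quantification of that factor and the correlation matrix from the study's estimated dependencies; only the Monte Carlo summation step is shown here.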

Relevance: 30.00%

Abstract:

Microstructure manipulation is fundamental to the study of biology and medicine, as well as to the advancement of micro- and nano-system applications. Manipulation of microstructures has been achieved through various recently developed microgripper devices, which have led to advances in micromachine assembly and single-cell manipulation, among others. Only two kinds of integrated feedback have been demonstrated so far: force sensing and optical binary feedback. As a result, the physical, mechanical, optical, and chemical information about the microstructure under study must be extracted with macroscopic instrumentation, such as confocal fluorescence microscopy and Raman spectroscopy. In this research work, novel Micro-Opto-Electro-Mechanical-System (MOEMS) microgrippers are presented. These devices utilize flexible optical waveguides as gripping arms, which provide the physical means of grasping a microobject while simultaneously enabling light to be delivered and collected. This unique capability allows extensive optical characterization of the structure being held, such as transmission, reflection, or fluorescence. The microgrippers require external actuation, which was accomplished by two methods: initially with a micrometer screw, and later with a piezoelectric actuator. Thanks to a novel actuation mechanism, the "fishbone", the gripping facets remain parallel to within 1 degree. The design, simulation, fabrication, and characterization are systematically presented. The devices' mechanical operation was verified by means of 3D finite element analysis simulations. The optical performance and losses were also simulated by the 3D-to-2D effective index finite-difference time-domain (FDTD) method as well as the 3D Beam Propagation Method (3D-BPM). The microgrippers were designed to manipulate structures from submicron dimensions up to approximately 100 μm. The devices were implemented in SU-8 due to its suitable optical and mechanical properties.
This work demonstrates two practical applications: the manipulation of single SKOV-3 human ovarian carcinoma cells, and the detection and identification of microparts tagged with a fluorescent "barcode" implemented with quantum dots. The novel devices presented open up new possibilities in the field of micromanipulation at the microscale, scalable to the nano-domain.

Relevance: 30.00%

Abstract:

This dissertation establishes a novel system for human face learning and recognition based on incremental multilinear Principal Component Analysis (PCA). Most existing face recognition systems need training data during the learning process. The system proposed in this dissertation utilizes an unsupervised or weakly supervised learning approach in which the learning phase requires a minimal amount of training data. It also overcomes the inability of traditional systems to adapt during the testing phase, where the decision process for newly acquired images continues to rely on the same old training data set. Consequently, when a new training set is to be used, the traditional approach requires that the entire eigensystem be generated again. As a means of speeding up this computation, the proposed method uses the eigensystem generated from the old training set, together with the new images, to generate the new eigensystem more efficiently in a so-called incremental learning process. In the empirical evaluation phase, two key factors are essential in evaluating the performance of the proposed method: (1) recognition accuracy and (2) computational complexity. In order to establish the most suitable algorithm for this research, a comparative analysis of the best-performing methods was carried out first; its results advocated the initial utilization of multilinear PCA in this research. To address the computational complexity of the subspace update procedure, a novel incremental algorithm was established, combining the traditional sequential Karhunen-Loeve (SKL) algorithm with a newly developed incremental modified fast PCA algorithm. In order to utilize multilinear PCA in the incremental process, a new unfolding method was developed to affix the newly added data at the end of the previous data.
The results of the incremental process based on these two methods bear out these theoretical improvements. Some object tracking results using video images are also provided, as another challenging task, to demonstrate the soundness of this incremental multilinear learning method.
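
The incremental idea can be illustrated with a plain numpy sketch. This is a simplification, not the dissertation's SKL + modified fast PCA hybrid (it ignores, for instance, the running-mean update a full SKL implementation performs): the old eigensystem is compressed to its energy-weighted top-k basis and merged with the new batch through an SVD of a much smaller matrix than the full data set.

```python
import numpy as np

# Sequential Karhunen-Loeve style update (simplified sketch): instead of
# redoing PCA on all images, the old rank-k eigensystem is merged with the
# new batch via an SVD of a (k + batch) x d matrix.
rng = np.random.default_rng(0)
k = 10                                        # retained components
old = rng.normal(size=(200, 64))              # old flattened face images
new = rng.normal(size=(50, 64))               # newly acquired images

# Old eigensystem: top-k right singular vectors and singular values.
_, s_old, vt_old = np.linalg.svd(old - old.mean(0), full_matrices=False)
basis_old = vt_old[:k] * s_old[:k, None]      # k x 64 energy-weighted basis

# Incremental update: stack the compressed old basis with the new batch and
# take the SVD of this (10 + 50) x 64 matrix instead of the full 250 x 64.
stacked = np.vstack([basis_old, new - new.mean(0)])
_, s_new, vt_new = np.linalg.svd(stacked, full_matrices=False)
eigvecs = vt_new[:k]                          # updated k x 64 eigensystem

projected = (new - new.mean(0)) @ eigvecs.T   # features for recognition
```

The point of the sketch is the cost structure: the SVD in the update step scales with the retained rank plus the batch size, not with the total number of images seen so far.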

Relevance: 30.00%

Abstract:

Given the growing number of wrongful convictions involving faulty eyewitness evidence and the strong reliance by jurors on eyewitness testimony, researchers have sought to develop safeguards to decrease erroneous identifications. While decades of eyewitness research have led to numerous recommendations for the collection of eyewitness evidence, less is known regarding the psychological processes that govern identification responses. The purpose of the current research was to expand the theoretical knowledge of eyewitness identification decisions by exploring two separate memory theories: signal detection theory and dual-process theory. This was accomplished by examining both system and estimator variables in the context of a novel lineup recognition paradigm. Both theories were also examined in conjunction with confidence to determine whether it might add significantly to the understanding of eyewitness memory.

In two separate experiments, both an encoding-based and a retrieval-based manipulation were chosen to examine the application of theory to eyewitness identification decisions. Dual-process estimates were measured through the use of remember-know judgments (Gardiner & Richardson-Klavehn, 2000). In Experiment 1, the effects of divided attention and lineup presentation format (simultaneous vs. sequential) were examined. In Experiment 2, perceptual distance and lineup response deadline were examined. Overall, the results indicated that discrimination and remember judgments (recollection) were generally affected by variations in encoding quality and response criterion, while know judgments (familiarity) were generally affected by variations in retrieval options. Specifically, as encoding quality improved, discrimination ability and judgments of recollection increased; and as the retrieval task became more difficult, there was a shift toward lenient choosing and more reliance on familiarity.

The application of signal detection theory and dual-process theory in the current experiments produced predictable results on both system and estimator variables. These theories were also compared to measures of general confidence, calibration, and diagnosticity. Applying the additional confidence measures in conjunction with signal detection theory and dual-process theory gave a more in-depth explanation than either theory alone. The general conclusion, therefore, is that eyewitness identifications can be understood in a more complete manner by applying theory and examining confidence. Future directions and policy implications are discussed.
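
For readers unfamiliar with the signal detection quantities referred to above, discrimination (d') and response criterion (c) are computed from hit and false-alarm rates; the rates below are invented for illustration, not taken from the experiments.

```python
from statistics import NormalDist

# Standard signal detection measures from a lineup hit rate and
# false-alarm rate (illustrative numbers only).
def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    z = NormalDist().inv_cdf                       # inverse normal CDF
    d_prime = z(hit_rate) - z(fa_rate)             # discrimination ability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

d_prime, criterion = sdt_measures(hit_rate=0.80, fa_rate=0.20)
# Better encoding raises d'; lenient choosing shows up as a lower criterion.
```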

Relevance: 30.00%

Abstract:

Background: During alternative splicing, the inclusion of an exon in the final mRNA molecule is determined by nuclear proteins that bind cis-regulatory sequences in a target pre-mRNA molecule. A recent study suggested that the regulatory codes of individual RNA-binding proteins may be nearly immutable between very diverse species such as mammals and insects. The model system Drosophila melanogaster therefore presents an excellent opportunity for the study of alternative splicing due to the availability of quality EST annotations in FlyBase. Methods: In this paper, we describe an in silico analysis pipeline to extract putative exonic splicing regulatory (ESR) sequences from a multiple alignment of 15 species of insects. Our method, ESTs-to-ESRs (E2E), uses graph analysis of EST splicing graphs to identify mutually exclusive (ME) exons, and combines phylogenetic measures, a sliding window along the multiple alignment, and Welch's t statistic to extract conserved ESR motifs. Results: The most frequent 100% conserved word of length 5 bp in different insect exons was "ATGGA". We identified 799 statistically significant "spike" hexamers; 218 motifs with either a left or right FDR-corrected spike-magnitude p-value < 0.05; and 83 with both left and right uncorrected p < 0.01. Eleven genes were identified with highly significant motifs in one ME exon but not in the other, suggesting regulation of ME exon splicing through these highly conserved hexamers. The majority of these genes have been shown to have regulated spatiotemporal expression. Ten elements were found to match three mammalian splicing regulator databases. A putative ESR motif, GATGCAG, was identified in ME-13b but not in ME-13a of Drosophila N-Cadherin, a gene that has been shown to have a distinct spatiotemporal expression pattern of spliced isoforms in a recent study.
Conclusions: Analysis of phylogenetic relationships and of the variability of sequence conservation, as implemented in the E2E spikes method, may lead to improved identification of ESRs. We found that approximately half of the putative ESRs common to insects and mammals have high statistical support (p < 0.01). Several Drosophila genes with spatiotemporal expression patterns were identified that contain putative ESRs located in one exon of an ME exon pair but not in the other.
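
The sliding-window "spike" idea can be sketched as follows. The conservation scores are toy numbers rather than real alignment data, and the simple window-versus-left-flank comparison is only a schematic of the E2E statistic, not its exact form:

```python
import numpy as np

# Toy sketch: per-column conservation scores along a multiple alignment are
# scanned with a hexamer-sized window, and Welch's t statistic compares the
# window against its left flank to flag spikes in conservation.
def welch_t(a: np.ndarray, b: np.ndarray) -> float:
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

rng = np.random.default_rng(1)
scores = rng.uniform(0.2, 0.5, size=60)   # background conservation
scores[30:36] = 0.95                      # a fully conserved hexamer "spike"

window = 6
t_stats = [
    welch_t(scores[i:i + window], scores[i - window:i])
    for i in range(window, len(scores) - window)
]
peak = int(np.argmax(t_stats)) + window   # position of the strongest spike
```

The real pipeline additionally weights columns by phylogenetic distance and tests both flanks with FDR correction; this sketch only shows why a conserved hexamer produces a large t value against its local background.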

Relevance: 30.00%

Abstract:

Low-frequency electromagnetic compatibility (EMC) is an increasingly important aspect in the design of practical systems, needed to ensure the functional safety and reliability of complex products. The opportunities for using numerical techniques to predict and analyze a system's EMC are therefore of considerable interest in many industries. As the first phase of the study, a proper model, including all the details of the components, was required; advances in EMC modeling were therefore reviewed, classifying analytical and numerical models. The selected approach was finite element (FE) modeling, coupled with the distributed network method, to model the converter's components and obtain the frequency behavioral model of the converter. The method has the ability to reveal the behavior of parasitic elements and higher resonances, which have critical impacts on EMI problems. For the EMC and signature studies of the machine drives, equivalent source modeling was studied. Considering the details of the multi-machine environment, including actual models, some innovations in equivalent source modeling were made to decrease the simulation time dramatically. Several models were designed in this study, of which the voltage-current cube model and the wire model gave the best results. A GA-based PSO method was used as the optimization process. Superposition and suppression of the fields in coupling the components were also studied and verified. The simulation time of the equivalent model is 80-100 times lower than that of the detailed model. All tests were verified experimentally. As an application of the EMC and signature study, fault diagnosis and condition monitoring of an induction motor drive were developed using radiated fields. In addition to experimental tests, the 3D FE analysis was coupled with circuit-based software to implement the incipient fault cases. The identification was implemented using an ANN for seventy different fault cases.

The simulation results were verified experimentally. Finally, identification of the types of power components was implemented. The results show that it is possible to identify the type of component, as well as the faulty component, by comparing the amplitudes of their stray field harmonics. Identification using the stray fields is nondestructive and can be used for setups that cannot go offline and be dismantled.

Relevance: 30.00%

Abstract:

The integrated project delivery (IPD) method has recently emerged as an alternative to traditional delivery methods. It has the potential to overcome the inefficiencies of traditional delivery methods by enhancing collaboration among project participants. Information and communication technology (ICT) facilitates IPD through effective management, processing, and communication of information within and among organizations. While the benefits of IPD, and the role of ICT in realizing them, have been generally acknowledged, the US public construction sector has been very slow in adopting IPD. The reasons are a lack of experience and an inadequate understanding of IPD among public owners, as confirmed by the results of the questionnaire survey conducted under this research study. The public construction sector should be aware of the value of IPD and should know the essentials for effective implementation of IPD principles; in particular, it should be cognizant of the opportunities offered by advancements in ICT to realize them.

To address this need, an IPD Readiness Assessment Model (IPD-RAM) was developed in this research study. The model was designed with the goal of determining the IPD readiness of a public owner organization, considering selected IPD principles and the ICT levels at which project functions are carried out. Subsequent analysis led to the identification of possible improvements in ICT that have the potential to increase IPD readiness scores. Termed gap identification, this process was used to formulate improvement strategies. The model was applied to six Florida International University (FIU) construction projects (case studies). The results showed that the IPD readiness of the organization was considerably low and that several project functions could be improved by using higher-level and/or more advanced ICT tools and methods.

Feedback from a focus group composed of FIU officials and an independent group of experts was received at various stages of this research and was utilized during development and implementation of the model. The focus group input was also helpful for validation of the model and its results. It is hoped that the model developed will be useful to construction owner organizations in assessing their IPD readiness and identifying appropriate ICT improvement strategies.

Relevance: 30.00%

Abstract:

Party identification is traditionally seen as an important linkage mechanism connecting voters to the party system. Previous analyses have suggested that the level of party identity is in decline in Germany, and in this article we first extend previous observations with more recent data. These suggest that the erosion of party identity continues up to the present time. An age-period-cohort analysis of the SOEP panel data suggests that period effects are significantly negative. Furthermore, it can be observed that throughout the 1992-2009 observation period, education level and political interest have become more important determinants of party identity. Contrary to some of the literature, therefore, it can be shown that the loss of party identity is concentrated among groups with lower levels of political sophistication, indicating that the socio-economic profile of the group with a sense of party identification has become more distinct compared to the population as a whole. In the discussion, we investigate the theoretical and democratic consequences of this trend.

Relevance: 30.00%

Abstract:

The main goal of this work is to determine the true cost incurred by the Republic of Ireland and Northern Ireland in order to meet their EU renewable electricity targets. The primary all-island of Ireland policy goal is that 40% of electricity will come from renewable sources in 2020. From this it is expected that wind generation on the Irish electricity system will be in the region of 32-37% of total generation. This raises issues resulting from wind energy being a non-synchronous, unpredictable, and variable source of energy, used on a scale never seen before for a single synchronous system. If changes are not made to traditional operational practices, the efficient running of the electricity system will be directly affected by these issues in the coming years. Using models of the electricity system for the all-island grid of Ireland, the effects of the high wind energy penetration expected in 2020 are examined. These models were developed using a unit commitment and economic dispatch tool called PLEXOS, which allows a detailed representation of the electricity system down to the individual generator level. The models replicate the true running of the electricity system through day-ahead scheduling and semi-relaxed use of these schedules, reflecting the Transmission System Operator's real-time dispatch decision making. In addition, they carefully consider other non-wind priority dispatch generation technologies that affect the overall system. In the models developed, three main issues associated with wind energy integration were selected for detailed examination, to determine the sensitivity of assumptions presented in other studies: wind energy's non-synchronous nature, its variability and spatial correlation, and its unpredictability.

This leads to an examination of the effects in three areas: the system operation constraints required for system security; different onshore-to-offshore ratios of installed wind energy; and the degree of accuracy in wind energy forecasting. Each of these areas directly impacts the way in which the electricity system is run, as they address, respectively, the three issues associated with wind energy stated above. It is shown that assumptions in these three areas have a large effect on the results in terms of total generation costs, wind curtailment, and the dispatch of generator technology types. In particular, accounting for these issues results in wind curtailment being predicted in much larger quantities than previously reported. This would have a large effect on wind energy companies, because wind is already a very low-profit-margin industry. Results from this work show that the relaxation of system operation constraints is crucial to the economic running of the electricity system, with large improvements shown in the reduction of wind curtailment and system generation costs. There are clear benefits in having a proportion of the wind installed offshore in Ireland, which would help to reduce the variability of wind energy generation on the system and therefore reduce wind curtailment. With envisaged future improvements in day-ahead wind forecasting from 8% to 4% mean absolute error, there are potential reductions in wind curtailment, system costs, and open-cycle gas turbine usage. This work illustrates the consequences of assumptions in the areas of system operation constraints, onshore/offshore installed wind capacities, and accuracy in wind forecasting, to better inform the true costs associated with running Ireland's changing electricity system as it continues to decarbonise into the near future. Through the use of Ireland as a case study, it also illustrates effects that will become ever more prevalent in other synchronous systems as they pursue a path of increasing renewable energy generation.
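
The link between operational constraints and curtailment can be shown with a toy sketch (in no way the PLEXOS model): whenever available wind exceeds the share of demand allowed by a system non-synchronous penetration (SNSP) limit, the excess must be spilled. All demand and wind numbers below are invented.

```python
import numpy as np

# Toy hourly demand and available wind for one year (MW, invented values).
rng = np.random.default_rng(7)
hours = 8760
demand = 4000 + 1000 * np.sin(np.linspace(0, 2 * np.pi * 365, hours))
wind = np.clip(rng.normal(1800, 900, hours), 0, 4500)

def curtailment(snsp_limit: float) -> float:
    """Fraction of available wind energy spilled under an SNSP limit."""
    allowed = snsp_limit * demand            # max non-synchronous output
    spilled = np.maximum(wind - allowed, 0.0)
    return spilled.sum() / wind.sum()

tight = curtailment(0.50)    # stricter operational constraint
relaxed = curtailment(0.75)  # relaxed constraint reduces curtailment
```

The real study also captures unit commitment, reserve, and forecast-error effects; the sketch only illustrates why relaxing the SNSP-type constraint monotonically reduces spilled wind energy.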

Relevance: 30.00%

Abstract:

Scatter in medical imaging is typically cast off as image-related noise that detracts from meaningful diagnosis, and it is therefore typically rejected or removed from medical images. However, it has been found that every material, including cancerous tissue, has a unique X-ray coherent scatter signature that can be used to identify the material or tissue. Such scatter-based tissue identification provides an advantage over conventional anatomical imaging through X-ray radiography: the ability to locate and identify particular materials. A coded aperture X-ray coherent scatter spectral imaging system has been developed in our group to classify different tissue types based on their unique scatter signatures. Previous experiments using our prototype have demonstrated that the coded aperture coherent scatter spectral imaging system (CACSSI) can discriminate healthy and cancerous tissue present in the path of a non-destructive X-ray beam. A key to the successful optimization of CACSSI as a clinical imaging method is to obtain anatomically accurate phantoms of the human body. This thesis describes the development and fabrication of 3D-printed anatomical scatter phantoms of the breast and lung.

The purpose of this work is to accurately model different breast geometries using tissue-equivalent phantoms, and to classify these tissues in a coherent X-ray scatter imaging system. Tissue-equivalent anatomical phantoms were designed to assess the capability of the CACSSI system to classify different types of breast tissue (adipose, fibroglandular, malignant). The phantoms were 3D printed based on DICOM data obtained from CT scans of prone breasts, and were tested through comparison of measured scatter signatures with those of adipose and fibroglandular tissue from the literature. Tumors in the phantom were modeled using a variety of biological tissues, including actual surgically excised benign and malignant tissue specimens. Lung-based phantoms have also been printed for future testing. Our imaging system was able to determine the location and composition of the various materials in the phantom, and the phantoms were used to characterize the CACSSI system in terms of beam width and imaging technique. The results of this work showed accurate modeling and characterization of the phantoms through comparison of the tissue-equivalent form factors to those from the literature. The physical construction of the phantoms, based on actual patient anatomy, was validated using mammography and computed tomography to visually compare the phantom images to clinical images of actual patient anatomy.

Relevance: 30.00%

Abstract:

This paper introduces a screw-theory-based method, termed the constraint and position identification (CPI) approach, to synthesize decoupled spatial translational compliant parallel manipulators (XYZ CPMs) with consideration of actuation isolation. The proposed approach is based on a systematic arrangement of rigid stages and compliant modules in a three-legged XYZ CPM system using the constraint spaces and the position spaces of the compliant modules. The constraint spaces and position spaces are first derived from screw theory, rather than from rigid-body mechanism design experience. The constraint spaces are then classified into different constraint combinations, with typical position spaces depicted via geometric entities. Furthermore, the systematic synthesis process based on the constraint combinations and the geometric entities is demonstrated via several examples. Finally, several novel decoupled XYZ CPMs with monolithic configurations are created and verified by finite element analysis. The present CPI approach enables experts and beginners alike to synthesize a variety of decoupled XYZ CPMs with consideration of actuation isolation by selecting an appropriate constraint and an optimal position for each of the compliant modules according to the specific application.

Relevance: 30.00%

Abstract:

The effects of vehicle speed on Structural Health Monitoring (SHM) of bridges under operational conditions are studied in this paper. The moving vehicle is modelled as a single-degree-of-freedom oscillator traversing a damaged beam at constant speed. The bridge is modelled as a simply supported Euler-Bernoulli beam with a breathing crack. The breathing crack is treated as a nonlinear system with bilinear stiffness characteristics related to the opening and closing of the crack. The unevenness of the bridge deck is modelled using the road classification of ISO 8608:1995(E). The stochastic description of the unevenness of the road surface is used as an aid to monitor the health of the structure in its operational condition. Numerical simulations are conducted considering the effects of changing vehicle speed on cumulant-based statistical damage detection parameters. The detection and calibration of damage at different levels is based on an algorithm dependent on the responses of the damaged beam due to passages of the load. Possibilities of damage detection and calibration under benchmarked and non-benchmarked cases are considered, and the sensitivity of the calibration values is studied. The findings of this paper are important for establishing what can be expected from different vehicle speeds when using bridge-vehicle interaction for damage detection, since the bridge does not need to be closed for monitoring. The identification of bunching in these speed ranges provides guidelines for applying the methodology developed in the paper.
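
The bilinear breathing-crack idea can be sketched as a single-mode simulation: stiffness drops when the crack opens (response on one side of equilibrium) and recovers when it closes. The mass, stiffness, damping, and crack-severity values below are illustrative placeholders, not the paper's parameters, and a constant force stands in for the full moving-load interaction.

```python
import numpy as np

# Single-mode model of a cracked beam with bilinear (breathing) stiffness,
# integrated with semi-implicit Euler. Illustrative parameters only.
def simulate(k_closed=1.0e6, crack_severity=0.2, m=1000.0, c=500.0,
             dt=1e-4, steps=100_000):
    k_open = k_closed * (1.0 - crack_severity)  # reduced stiffness, crack open
    u, v = 0.0, 0.0                             # modal displacement, velocity
    force = 5000.0                              # constant moving-load proxy
    response = np.empty(steps)
    for i in range(steps):
        k = k_open if u > 0 else k_closed       # breathing: bilinear switch
        a = (force - c * v - k * u) / m
        v += a * dt
        u += v * dt
        response[i] = u
    return response

resp = simulate()
# The asymmetric restoring force makes the response non-Gaussian; higher-order
# cumulants of such responses are the damage-sensitive features referred to above.
```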

Relevance: 30.00%

Abstract:

While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement a stochastic process. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, it remains unknown whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications.

In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
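
The CTMC-to-geometry mapping can be illustrated with a toy chain: states stand for chromophores, off-diagonal rates stand in for RET hop rates, and photon emission is the absorbing event, so sampled emission times follow a phase-type distribution determined by the rates. All rate values below are invented for illustration.

```python
import numpy as np

# Toy CTMC for exciton dynamics on a 3-chromophore network.
# rates[i, j]: hop rate from chromophore i to j; emit[i]: emission rate at i.
rng = np.random.default_rng(3)
rates = np.array([[0.0, 2.0, 0.5],
                  [1.0, 0.0, 3.0],
                  [0.2, 1.5, 0.0]])
emit = np.array([0.1, 0.3, 4.0])

def sample_emission_time() -> float:
    state, t = 0, 0.0                       # excitation starts on chromophore 0
    while True:
        out = rates[state].sum() + emit[state]
        t += rng.exponential(1.0 / out)     # exponential holding time
        if rng.random() < emit[state] / out:
            return t                        # photon emitted: absorbing event
        # Otherwise hop to a neighbour with probability proportional to rate.
        state = rng.choice(3, p=rates[state] / rates[state].sum())

times = np.array([sample_emission_time() for _ in range(20_000)])
mean_time = times.mean()                    # matches the phase-type mean
```

Changing the geometry of the network changes the rate matrix, and hence the sampled distribution, which is the sense in which the chain is "programmed" by the chromophore layout.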

By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.

Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms with wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware of traditional computers, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor or GPU as a specialized functional unit, or organized as a discrete accelerator, to bring substantial speedups and power savings.