Abstract:
Firms worldwide are taking major initiatives to reduce the carbon footprint of their supply chains in response to growing governmental and consumer pressure. In practice, these supply chains face stochastic and non-stationary demand, yet most studies of the inventory lot-sizing problem with emission concerns assume deterministic demand. In this paper, we study the inventory lot-sizing problem under non-stationary stochastic demand with emission and cycle service level constraints, considering a carbon cap-and-trade regulatory mechanism. Using a mixed integer linear programming model, we investigate the effects of emission parameters and product- and system-related features on supply chain performance through extensive computational experiments designed to cover general business settings rather than a specific scenario. Results show that cycle service level and demand coefficient of variation have significant impacts on total cost and emissions irrespective of the level of demand variability, while the impact of the product's demand pattern is significant only at lower levels of demand variability. Results also show that an increasing carbon price reduces total cost, total emissions, and total inventory, and that the scope for emission reduction through a higher carbon price is greater at higher cycle service levels and demand coefficients of variation. This analysis helps supply chain managers make the right decisions in different demand and service level situations.
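The cap-and-trade lot-sizing model can be illustrated with a small single-item instance. The sketch below is a minimal deterministic-demand illustration using the open-source PuLP solver; all period data, cost figures, and emission factors are hypothetical placeholders, and the paper's stochastic, service-level-constrained formulation is deliberately simplified away.

```python
# Minimal single-item lot-sizing MILP with a carbon cap-and-trade term.
# All data below are hypothetical illustration values.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

T = range(4)                     # planning periods
demand = [40, 60, 30, 50]        # per-period demand (units)
setup_cost, hold_cost = 100.0, 1.0
setup_em, unit_em, hold_em = 20.0, 0.5, 0.1   # emission factors
cap, carbon_price = 120.0, 2.0   # free allowance and market carbon price

m = LpProblem("lot_sizing_cap_and_trade", LpMinimize)
q = [LpVariable(f"q{t}", lowBound=0) for t in T]       # order quantity
I = [LpVariable(f"I{t}", lowBound=0) for t in T]       # end-of-period inventory
y = [LpVariable(f"y{t}", cat="Binary") for t in T]     # setup indicator
e = LpVariable("traded")                               # allowances bought(+)/sold(-)

bigM = sum(demand)
for t in T:
    prev = I[t - 1] if t > 0 else 0
    m += prev + q[t] - demand[t] == I[t]               # inventory flow balance
    m += q[t] <= bigM * y[t]                           # setup linking

total_em = lpSum(setup_em * y[t] + unit_em * q[t] + hold_em * I[t] for t in T)
m += total_em == cap + e                               # trading closes the emission balance
m += lpSum(setup_cost * y[t] + hold_cost * I[t] for t in T) + carbon_price * e

m.solve(PULP_CBC_CMD(msg=False))
print("orders:", [v.value() for v in q], "allowances traded:", e.value())
```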
Abstract:
Urinary bladder diseases are a common problem throughout the world, are often difficult to diagnose accurately, and pose a heavy financial burden on health services. Urinary bladder tissue from male pigs was measured spectrophotometrically, and the resulting data were used to calculate absorption, transmission, and reflectance parameters, along with the derived scattering and absorption coefficients. These were employed to create a "generic" computational bladder model based on optical properties, simulating the propagation of photons through the tissue at different wavelengths. Using the Monte-Carlo method and fluorescence spectra at UV and blue excitation wavelengths, diagnostically important biomarkers were modeled. Additionally, the multifunctional noninvasive diagnostics system "LAKK-M" was used to gather fluorescence data for essential comparisons. The ultimate goal of the study was to simulate the effects of varying excitation wavelengths on bladder tissue in order to assess the effectiveness of photonics-based diagnostic devices. With increased accuracy, this model could reliably aid in differentiating healthy and pathological tissues within the bladder and potentially other hollow organs.
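The photon-transport idea can be sketched in a few lines: each photon takes exponentially distributed steps and is either absorbed or re-scattered according to the tissue's optical coefficients. The absorption and scattering values below are hypothetical placeholders rather than the measured bladder parameters, and isotropic scattering stands in for a full Henyey-Greenstein phase function.

```python
# Toy Monte-Carlo photon random walk through a homogeneous tissue slab.
# mu_a (absorption) and mu_s (scattering) are placeholder values per mm.
import numpy as np

rng = np.random.default_rng(0)
mu_a, mu_s = 0.2, 10.0           # hypothetical optical coefficients [1/mm]
mu_t = mu_a + mu_s               # total interaction coefficient
thickness = 2.0                  # slab depth [mm]

def run_photon():
    """Return 'absorbed', 'transmitted', or 'reflected' for one photon."""
    z, uz = 0.0, 1.0             # depth and z-direction cosine
    while True:
        z += uz * rng.exponential(1.0 / mu_t)   # sample free path length
        if z < 0.0:
            return "reflected"
        if z > thickness:
            return "transmitted"
        if rng.random() < mu_a / mu_t:          # absorption event
            return "absorbed"
        uz = rng.uniform(-1.0, 1.0)             # isotropic re-scatter

fates = [run_photon() for _ in range(20000)]
for f in ("absorbed", "transmitted", "reflected"):
    print(f, fates.count(f) / len(fates))
```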
Abstract:
This study examined the effects of computer assisted instruction (CAI), 1 hour per week for 18 weeks, on changes in the computational scores and attitudes of developmental mathematics students at schools with predominantly Black enrollment. Comparisons were made between students using CAI with differing software (PLATO, CSR, or both together) and students using traditional instruction (TI) only. The study was conducted in the Dade County Public School System from February through June 1991 at two senior high schools. The dependent variables, the State Student Assessment Test (SSAT) and the School Subjects Attitude Scales (SSAS), measured students' computational scores and attitudes toward mathematics in three categories: interest, usefulness, and difficulty. Univariate analyses of variance were performed on the least squares mean differences from pretest to posttest to test main effects and interactions, and t-tests measured significant main effects and interactions. Results were interpreted at the .01 level of significance. Null hypotheses 1, 2, and 3 compared versions of CAI against the control group for changes in mathematical computation scores measured with the SSAT. It could not be concluded that changes in the standardized mathematics test scores of students using CAI with differing software for 18 class hours combined with TI were significantly higher than the changes for students receiving TI only. Null hypotheses 4, 5, and 6 tested the effects of CAI on attitudes toward mathematics for experimental groups against control groups measured with the SSAS; attitude changes for students using CAI combined with TI were not significantly higher than those for students receiving TI only. Teacher effect on students' computational scores was a more influential variable than CAI. No interaction was found between gender and learning method on standardized mathematics test scores (null hypothesis 7).
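The hypothesis tests described above amount to comparing pretest-to-posttest gain scores between groups. A minimal sketch of that comparison with SciPy appears below; the gain-score arrays are fabricated placeholders purely to show the mechanics of testing at the .01 level, not the study's data.

```python
# Compare pretest-to-posttest gains of a CAI group vs. a TI-only group.
# The gain arrays are fabricated placeholder data for illustration.
import numpy as np
from scipy import stats

cai_gains = np.array([5, 3, 8, 2, 6, 4, 7, 1, 5, 6])
ti_gains = np.array([4, 2, 5, 3, 4, 1, 6, 2, 3, 4])

t_stat, p_value = stats.ttest_ind(cai_gains, ti_gains, equal_var=False)
alpha = 0.01
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("reject H0" if p_value < alpha else "fail to reject H0 at the .01 level")
```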
Abstract:
This thesis develops and validates the framework of a specialized maintenance decision support system for a discrete part manufacturing facility. Its construction follows a modular approach based on the fundamental philosophy of Reliability Centered Maintenance (RCM). The proposed architecture integrates System Decomposition, System Evaluation, Failure Analysis, Logic Tree Analysis, and Maintenance Planning modules, addressing the distinctive maintenance inadequacies of modern discrete part manufacturing systems. Well-established techniques are incorporated as building blocks of the system's modules, including Failure Mode Effect and Criticality Analysis (FMECA), Logic Tree Analysis (LTA), Theory of Constraints (TOC), and an Expert System (ES). A Maintenance Information System (MIS) performs the system's support functions. Validation was performed by field testing the system at a Miami-based manufacturing facility. Such a maintenance support system can reduce downtime losses and contribute to higher product quality, with improved profitability as the ultimate outcome.
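Among the building blocks listed above, FMECA is the most readily illustrated: each failure mode is scored for severity, occurrence, and detectability, and the product of the three ranks maintenance priority. The sketch below uses hypothetical failure modes and scores; it is a generic illustration of the technique, not the thesis's actual module.

```python
# Rank failure modes by Risk Priority Number (RPN = severity x occurrence x detection).
# The failure modes and 1-10 scores below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int      # 1 (minor) .. 10 (catastrophic)
    occurrence: int    # 1 (rare)  .. 10 (frequent)
    detection: int     # 1 (easily caught) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("spindle bearing wear", 7, 5, 4),
    FailureMode("coolant pump leak", 4, 6, 2),
    FailureMode("servo drive overheating", 8, 3, 6),
]

# Highest RPN gets maintenance attention first.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.name:28s} RPN={fm.rpn}")
```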
Abstract:
This dissertation presents dynamic flow experiments with fluorescently labeled platelets that allow spatial observation of wall attachment in inter-strut spacings, in order to investigate its relationship to flow patterns. Human blood with fluorescently labeled platelets was circulated through an in vitro system that produced physiologic pulsatile flow in (1) a parallel plate flow chamber containing two-dimensional (2D) stents that feature completely recirculating flow, partially recirculating flow, and completely reattached flow, and (2) a three-dimensional (3D) cylindrical tube containing stents of various geometric designs. Flow detachment and reattachment points exhibited very low platelet deposition. Platelet deposition was very low in the recirculation regions of the 3D stents, unlike the 2D stents. Deposition distal to a strut was always high in both 2D and 3D stents. Spirally recirculating regions, found in 3D but not in 2D stents, showed higher deposition than well-separated regions of recirculation.
Abstract:
With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex, with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage, and maintain. Traditional approaches to system management have largely relied on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This is widely acknowledged to be a cumbersome, labor-intensive, and error-prone process that also struggles to keep up with rapidly changing environments. In addition, many traditional business systems deliver primarily pre-defined historic metrics for long-term strategic or mid-term tactical analysis, and lack the flexibility to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitoring and managing complex computing and BI systems. To realize the goal of autonomic management and enable self-management capabilities, we propose to mine the historical log data generated by computing and BI systems and automatically extract actionable patterns from it. This dissertation focuses on the development of data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization; leading indicator identification; pattern prioritization by exploring link structures; and a tensor model for three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets show the effectiveness of the proposed approaches.
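As a concrete illustration of the first problem, raw log messages can be reduced to event templates by masking variable tokens and then summarized by template frequency. The sketch below is a simplified stand-in for the dissertation's categorization techniques, with made-up log lines.

```python
# Group raw log lines into event templates by masking variable tokens.
# The sample lines are made-up; real systems feed millions of such lines.
import re
from collections import Counter

LOGS = [
    "2024-01-01 12:00:01 disk /dev/sda1 usage 91%",
    "2024-01-01 12:00:05 disk /dev/sdb1 usage 87%",
    "2024-01-01 12:01:00 connection from 10.0.0.12 refused",
    "2024-01-01 12:02:30 connection from 10.0.0.47 refused",
]

def to_template(line: str) -> str:
    """Mask timestamps, IP addresses, and numbers to expose the event type."""
    line = re.sub(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", "<TS>", line)
    line = re.sub(r"\d+\.\d+\.\d+\.\d+", "<IP>", line)
    line = re.sub(r"\d+%?", "<NUM>", line)
    return line

summary = Counter(to_template(l) for l in LOGS)
for template, count in summary.most_common():
    print(count, template)
```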
Abstract:
This dissertation presents a system-wide approach, based on genetic algorithms, to optimizing transfer times for an entire bus transit system. Optimizing transfer times in a transit system is a complicated problem because of the large set of binary and discrete variables involved. The combinatorial nature of the problem imposes a computational burden and makes it difficult to solve by classical mathematical programming methods. The genetic algorithm proposed in this research searches for a combination of adjustments to the timetables of all routes in the system. It makes use of existing scheduled timetables and ridership demand at all transfer locations, and takes into consideration the randomness of bus arrivals. Data from Broward County Transit are used to compute total transfer times. The proposed genetic algorithm-based approach proves capable of producing substantial time savings over the existing transfer times in a reasonable amount of computation time. The dissertation also addresses issues related to spatial and temporal modeling, variability in bus arrival and departure times, walking time, and the integration of scheduling and ridership data.
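The core idea can be sketched compactly: a chromosome holds one timetable offset per route, and fitness is the total passenger wait accumulated at transfer points. The toy sketch below assumes a common headway and a small made-up transfer demand table; the dissertation's treatment of arrival randomness and real schedule data is omitted.

```python
# Toy GA: shift each route's timetable by an offset (minutes) to minimize
# total passenger wait at transfer points. All data are hypothetical.
import random

random.seed(1)
N_ROUTES, HEADWAY = 5, 15                 # routes and common headway [min]
# (from_route, to_route, base_gap, riders): made-up transfer demand
TRANSFERS = [(0, 1, 4, 30), (1, 2, 9, 20), (2, 3, 2, 25), (3, 4, 12, 15)]

def wait(offsets):
    """Total rider-minutes of transfer wait under the given offsets."""
    total = 0
    for a, b, gap, riders in TRANSFERS:
        total += ((gap + offsets[b] - offsets[a]) % HEADWAY) * riders
    return total

def evolve(pop_size=60, gens=200, mut=0.2):
    pop = [[random.randrange(HEADWAY) for _ in range(N_ROUTES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=wait)                        # lower wait = fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, N_ROUTES)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mut:             # point mutation
                child[random.randrange(N_ROUTES)] = random.randrange(HEADWAY)
            children.append(child)
        pop = survivors + children
    return min(pop, key=wait)

best = evolve()
print("best offsets:", best, "total wait:", wait(best))
```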
Abstract:
This dissertation establishes a novel system for human face learning and recognition based on incremental multilinear Principal Component Analysis (PCA). Most existing face recognition systems need training data during the learning process. The proposed system uses an unsupervised or weakly supervised learning approach in which the learning phase requires a minimal amount of training data. It also overcomes the inability of traditional systems to adapt during the testing phase, where the decision process for newly acquired images continues to rely on the same old training set; in the traditional approach, using a new training set requires regenerating the entire eigensystem. To speed up this computation, the proposed method uses the eigensystem generated from the old training set, together with the new images, to generate the new eigensystem more efficiently in a so-called incremental learning process. In the empirical evaluation, two key factors are essential for assessing the performance of the proposed method: (1) recognition accuracy and (2) computational complexity. To establish the most suitable algorithm for this research, a comparative analysis of the best performing methods was carried out first; its results supported the initial use of multilinear PCA. To address the computational complexity of the subspace update procedure, a novel incremental algorithm was established that combines the traditional sequential Karhunen-Loeve (SKL) algorithm with a newly developed incremental modified fast PCA algorithm. To use multilinear PCA in the incremental process, a new unfolding method was developed that appends the newly added data to the end of the previous data. The results of the incremental process based on these two methods bear out these theoretical improvements. Object tracking results on video images are also provided as a further challenging task demonstrating the soundness of this incremental multilinear learning method.
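The incremental idea, stripped of the multilinear unfolding, can be illustrated with scikit-learn's IncrementalPCA, which updates the subspace batch by batch instead of re-solving the full eigensystem. This is a generic stand-in rather than the dissertation's SKL-based algorithm, and random vectors stand in for vectorized face images.

```python
# Update a PCA subspace batch-by-batch instead of recomputing from scratch.
# Random vectors stand in for vectorized face images.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
d, k = 400, 16                             # image vector length, subspace size
ipca = IncrementalPCA(n_components=k)

old_faces = rng.normal(size=(200, d))      # initial training set
ipca.partial_fit(old_faces)                # build the initial eigenbasis

new_faces = rng.normal(size=(50, d))       # newly acquired images
ipca.partial_fit(new_faces)                # subspace updated incrementally

codes = ipca.transform(new_faces)          # project onto the running eigenbasis
print(codes.shape, ipca.explained_variance_ratio_[:3])
```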
Abstract:
Chloroperoxidase (CPO) is a highly versatile heme-containing enzyme that catalyzes a broad spectrum of reactions. The remarkable feature of this enzyme is the high regio- and enantio-selectivity exhibited in CPO-catalyzed oxidation reactions. The aim of this dissertation is to elucidate the structural basis for these regio- and enantio-selective transformations and to investigate the application of CPO in the biodegradation of synthetic dyes. To unravel the mechanism of CPO-catalyzed regioselective oxidation of indole, the dissertation explored the structure of the CPO-indole complex using paramagnetic relaxation and molecular modeling. The distances between the protons of indole and the heme iron revealed that the pyrrole ring of indole is oriented toward the heme with its 2-H pointing directly at the heme iron. This provides the first experimental and theoretical explanation for the "unexpected" regioselectivity of CPO-catalyzed indole oxidation. Furthermore, the residues Leu 70, Phe 103, Ile 179, Val 182, Glu 183, and Phe 186 were found essential to substrate binding. These results will guide the design of CPO mutants with tailor-made activities for biotechnological applications. To understand the origin of the enantioselectivity of CPO-catalyzed oxidation reactions, the interactions of CPO with substrates such as 2-(methylthio)thiophene were investigated by nuclear magnetic resonance (NMR) spectroscopy and computational techniques; the enantioselectivity is partly explained by the binding orientation of the substrates. In the third facet of this dissertation, a green and efficient system for the degradation of synthetic dyes was developed. Several commercial dyes, such as orange G, were tested in the CPO-H2O2-Cl- system, where their degradation proved very efficient. The presence of halide ions and an acidic pH were found necessary for the decomposition of the dyes. Significantly, the results revealed that this degradation of azo dyes involves a ferric hypochlorite intermediate of CPO (Fe-OCl), compound X.
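The distance estimates behind the indole-binding model follow the standard paramagnetic relaxation relationship, in which the metal-proton distance scales as the sixth root of the relaxation enhancement (1/T1M = K/r^6 in the Solomon-Bloembergen formalism). The sketch below only shows the arithmetic of that scaling; both the constant K and the T1M values are hypothetical illustration numbers, not the dissertation's measurements.

```python
# Sixth-root distance scaling from paramagnetic relaxation enhancement:
# 1/T1M = K / r**6  =>  r = (K * T1M) ** (1/6)   (Solomon-Bloembergen form)
# K and the T1M values below are hypothetical illustration numbers.
K = 1.0e5   # collects gyromagnetic and correlation-time factors [A^6 / s]
protons = {"indole 2-H": 0.008, "indole 4-H": 0.060, "indole 7-H": 0.150}  # T1M [s]

for name, t1m in protons.items():
    r = (K * t1m) ** (1.0 / 6.0)   # shorter T1M -> closer to the heme iron
    print(f"{name}: r = {r:.1f} A from the heme iron")
```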
Abstract:
The main focus of this thesis is the relative localization problem of a heterogeneous team comprising both ground robots and micro aerial vehicles. This team configuration combines the increased accessibility and better perspective provided by aerial robots with the higher computational and sensory resources of the ground agents, realizing a cooperative multi-robot system suitable for hostile autonomous missions. In such a scenario, however, the strict constraints on flight time, sensor payload, and computational capability of micro aerial vehicles limit the practical applicability of popular map-based localization schemes for GPS-denied navigation. The resource-limited aerial platforms of this team therefore demand simpler localization means for autonomous navigation. Relative localization is the process of estimating the formation of a robot team from acquired inter-robot relative measurements. It allows the team members to know their relative formation even without a global localization reference such as GPS or a map. A typical robot team thus benefits from a relative localization service because it enables formation control, collision avoidance, and supervisory control tasks independent of any global localization service. More importantly, a heterogeneous team of ground robots and computationally constrained aerial vehicles benefits because relative localization provides the crucial information required for autonomous operation of the weaker agents, enabling less capable robots to assume supportive roles and contribute to the more powerful robots executing the mission. This study therefore proposes a relative localization-based approach for ground and micro aerial vehicle cooperation, and develops the inter-robot measurement, filtering, and distributed computing modules necessary to realize the system. The research makes three significant contributions. First, it designs and validates a novel inter-robot relative measurement hardware solution with the accuracy, range, and scalability characteristics necessary for relative localization. Second, it analyzes and designs a novel nonlinear filtering method that allows relative localization modules and attitude reference filters to run on low-cost devices with optimal tuning parameters. Third, it designs and validates a novel distributed relative localization approach that harnesses the distributed computing capability of the team to minimize communication requirements, achieve consistent estimation, and enable efficient data correspondence within the network. The complete relative localization-based system is validated through multiple indoor experiments and numerical simulations. The relative localization-based navigation concept, with the sensing, filtering, and distributed computing methods introduced in this thesis, compensates for the system limitations of a ground and micro aerial vehicle team and also targets hostile environmental conditions. The work thus constitutes an essential step toward autonomous navigation of heterogeneous teams in real-world applications.
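A minimal version of the filtering module can be sketched as an extended Kalman filter that fuses noisy range-bearing measurements between a ground robot and an aerial vehicle into a relative position estimate. The sketch is planar and all motion and noise parameters are hypothetical; the thesis's actual nonlinear filter design is considerably richer.

```python
# Planar EKF: estimate an aerial robot's position relative to a ground robot
# from noisy range-bearing measurements. All parameters are hypothetical.
import numpy as np

def ekf_update(x, P, z, R):
    """One measurement update. x=[dx,dy] relative position, z=[range,bearing]."""
    dx, dy = x
    rng_pred = np.hypot(dx, dy)
    z_pred = np.array([rng_pred, np.arctan2(dy, dx)])
    H = np.array([[dx / rng_pred, dy / rng_pred],
                  [-dy / rng_pred**2, dx / rng_pred**2]])   # measurement Jacobian
    y = z - z_pred
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi             # wrap bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                          # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

rng = np.random.default_rng(2)
true_pos = np.array([3.0, 4.0])                             # static relative pose
x, P = np.array([1.0, 1.0]), np.eye(2) * 10.0               # rough prior
R = np.diag([0.05**2, np.radians(2.0)**2])                  # sensor noise

for _ in range(20):
    z = np.array([np.hypot(*true_pos), np.arctan2(true_pos[1], true_pos[0])])
    z += rng.multivariate_normal(np.zeros(2), R)            # noisy measurement
    x, P = ekf_update(x, P, z, R)
print("estimate:", x, "truth:", true_pos)
```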
Abstract:
This dissertation studies coding strategies in computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager, and increasing sensitivity in any one dimension can significantly compromise the others.
This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process model, and reconstruction algorithm of each sensing system.
Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire the extra dimensions at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging instead multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of several hundred.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled sensor can localize multiple speakers in both stationary and dynamic auditory scenes, and distinguish mixed conversations from independent sources with a high audio recognition rate.
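The common thread of these systems is the compressive-sensing recovery step: a short multiplexed measurement vector y = Ax is inverted by exploiting the sparsity of the signal x. The generic sketch below uses iterative soft thresholding (ISTA), with a random Gaussian matrix standing in for the optical or acoustic coding hardware described above.

```python
# Recover a sparse signal from compressive measurements y = A x via ISTA.
# The Gaussian A stands in for an optical/acoustic coding operator.
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 256, 80, 6                      # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random coding matrix
y = A @ x_true                            # multiplexed measurement

step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz constant of grad
lam, x = 0.01, np.zeros(n)
for _ in range(500):                      # ISTA: gradient step + soft threshold
    g = x - step * A.T @ (A @ x - y)
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```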
Abstract:
Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy-dependent x-ray attenuation. Spectral CT is especially suited to imaging with K-edge contrast agents, addressing the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD that enables full-spectrum CT, in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted by undesirable physical effects in the detector and are very noisy due to photon starvation. In this work, we propose two machine learning methods to address the spectral distortion and improve material decomposition. The first approach models the distortions with an artificial neural network (ANN) and compensates for them in a statistical reconstruction. The second approach corrects for the distortion directly in the projections. Both techniques can be implemented as a calibration process in which the neural network is trained on 3D-printed phantom data to learn either the distortion model or the correction model. This replaces the synchrotron measurements required by the conventional technique to derive the distortion model parametrically, which can be costly and time-consuming. The results demonstrate the experimental feasibility and potential advantages of ANN-based distortion modeling and correction for more accurate K-edge imaging with a PCXD. Given the computational efficiency with which the ANN can be applied to projection data, the proposed scheme can be readily integrated into existing CT reconstruction pipelines.
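The second (projection-domain) approach can be sketched as a regression problem: train a small network to map measured, distorted spectra back to their ideal counterparts using calibration data of known composition. The sketch below uses scikit-learn's MLPRegressor on synthetic stand-in spectra with a toy Gaussian-blur distortion; it illustrates the idea rather than reproducing the authors' trained model.

```python
# Learn a projection-domain spectral distortion correction with a small MLP.
# Synthetic spectra stand in for 3D-printed phantom calibration data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n_bins, n_train = 32, 2000

ideal = rng.uniform(0.0, 1.0, size=(n_train, n_bins))        # true spectra
blur = np.exp(-0.5 * ((np.arange(n_bins)[:, None]
                       - np.arange(n_bins)[None, :]) / 2.0) ** 2)
blur /= blur.sum(axis=1, keepdims=True)                      # toy detector response
distorted = ideal @ blur.T + 0.01 * rng.normal(size=ideal.shape)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(distorted, ideal)                                    # calibration step

test = rng.uniform(0.0, 1.0, size=(5, n_bins))
corrected = net.predict(test @ blur.T)                       # correct new spectra
print("mean abs error:", np.abs(corrected - test).mean())
```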
Abstract:
This paper introduces a novel, in-depth approach to analyzing the differences in writing style between two famous Romanian orators, based on automated textual complexity indices for the Romanian language. The considered authors are (a) Mihai Eminescu, Romania's national poet and a remarkable journalist of his time, and (b) Ion C. Brătianu, one of the most important Romanian politicians of the mid-19th century. Both orators shared a journalistic interest, namely the desire to spread the word about political issues in Romania via the printing press, the most important public voice of the time. Both authors also exhibit distinctive writing styles, and our aim is to explore these differences through our ReaderBench framework, which computes a wide range of lexical and semantic textual complexity indices for Romanian and other languages. The corpus contains two collections of speeches, one for each orator, covering the period 1857-1880. The results of this study highlight the lexical and cohesive textual complexity indices that best reflect the differences in writing style, measures relying on Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) semantic models.
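The LSA-based measures can be illustrated generically: documents are embedded into a low-rank latent space via truncated SVD of a TF-IDF matrix, and similarity is then read off as cosine similarity in that space. The snippet below is such a generic sketch on placeholder English sentences, not the ReaderBench pipeline or its Romanian models.

```python
# Generic LSA: TF-IDF -> truncated SVD -> cosine similarity in latent space.
# Placeholder English sentences stand in for the Romanian speech corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the press must speak freely about the affairs of the nation",
    "a free press speaks for the people and their political life",
    "the harvest this year depends on rain and the patience of farmers",
]

tfidf = TfidfVectorizer().fit_transform(docs)
latent = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
print(cosine_similarity(latent))   # docs 0 and 1 should score closest
```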
Abstract:
Multiphase flows of the oil-water-gas type are very common in industrial activities such as chemical processing and petroleum extraction, and their measurement presents several difficulties. Precisely determining the volume fraction of each element composing a multiphase flow is very important in chemical plants and in the petroleum industry. This work presents a methodology for determining volume fractions in annular and stratified multiphase flow systems using neutrons and artificial intelligence, based on the principles of transmission/scattering of fast neutrons from a 241Am-Be source and on point flux measurements that are influenced by variations in the volume fractions. The proposed geometries in the mathematical model were used to build a data set in which the thickness of each material was varied to span the volume fractions of each phase, providing 119 compositions used in simulations with MCNP-X, a computer code based on the Monte Carlo method that simulates radiation transport. An artificial neural network (ANN) was trained with the data obtained from MCNP-X and used to correlate the simulated measurements with the true fractions. The ANN was able to correlate the simulated data with the volume fractions of the multiphase flows (oil-water-gas) in both the annular and stratified flow patterns, yielding an average relative error (%) for each phase of: annular (air = 3.85, water = 4.31, oil = 1.08); stratified (air = 3.10, water = 2.01, oil = 1.45). The method proved effective in determining the fraction of each material composing the phases, demonstrating the feasibility of the technique.
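The correlation step can be sketched as a small regression network mapping detector responses to the three volume fractions, evaluated by the same mean relative error figure quoted above. The sketch below uses synthetic stand-in data (a toy linear mixing model plus noise) rather than the MCNP-X responses, and the sensitivity matrix is hypothetical.

```python
# Map (synthetic) neutron detector responses to gas/water/oil volume fractions
# and report the mean relative error per phase, as in the study's evaluation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n = 119                                          # compositions, as in the paper
fracs = rng.dirichlet(np.full(3, 5.0), size=n)   # [gas, water, oil], sum to 1

mix = np.array([[0.2, 1.0, 0.7],                 # hypothetical detector
                [0.9, 0.3, 0.5],                 # sensitivities per phase
                [0.4, 0.6, 1.2]])
counts = fracs @ mix.T + 0.01 * rng.normal(size=(n, 3))   # stand-in for MCNP-X

train, test = slice(0, 90), slice(90, n)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(counts[train], fracs[train])

pred = net.predict(counts[test])
rel_err = np.abs(pred - fracs[test]) / fracs[test]
for phase, e in zip(("gas", "water", "oil"), rel_err.mean(axis=0)):
    print(f"{phase}: mean relative error {100 * e:.1f}%")
```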
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements that change as a function of time. When such a problem is solved on a message passing multiprocessor machine [5], the combination of these characteristics leads to system performance that deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved by periodic redistribution of the computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications, intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
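The remap decision can be reduced to a cost-benefit test: re-balance only when the time saved over the remaining iterations is predicted to exceed the cost of redistribution. The sketch below is a generic statement of that test with hypothetical timing inputs, not the CAPTools implementation.

```python
# Generic dynamic load-balancing decision: remap only when the projected
# saving over the remaining iterations outweighs the redistribution cost.
def should_remap(step_times, ideal_step_time, remap_cost, steps_left):
    """step_times: recent per-iteration wall times paced by the slowest processor."""
    current = sum(step_times) / len(step_times)      # observed iteration time
    saving_per_step = current - ideal_step_time      # gain if perfectly balanced
    return saving_per_step * steps_left > remap_cost

# Hypothetical numbers: steps now take ~1.30 s against a balanced 1.00 s,
# remapping costs 45 s, and 200 steps remain -> about 59 s saved > 45 s: remap.
print(should_remap([1.28, 1.31, 1.30], 1.00, 45.0, 200))
```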