949 results for collection system


Relevance:

30.00%

Publisher:

Abstract:

The Bahamas is a small island nation dealing with a freshwater shortage. All of the country's freshwater is contained in shallow lens aquifers that are recharged solely by rainfall. The country has been struggling to meet water demand through a combination of over-pumping of aquifers, transport of water by barge between islands, and desalination of sea water. In recent decades, new development on New Providence, where the capital city of Nassau is located, has created a large area of impervious surfaces and, with it, a substantial amount of runoff, with the result that several of the aquifers are not being recharged. A geodatabase was assembled to assess and estimate the quantity of runoff from these impervious surfaces, and potential recharge locations were identified using a combination of Geographic Information Systems (GIS) and remote sensing. This study showed that runoff from impervious surfaces in New Providence represents a large freshwater resource that could potentially be used to recharge the island's lens aquifers.
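
As a rough illustration of the kind of estimate such a geodatabase supports, here is a minimal sketch applying the rational method (runoff volume = coefficient × rainfall × area). The runoff coefficient, rainfall depth, and impervious area below are hypothetical placeholders, not values from the study.

```python
# Illustrative runoff estimate for impervious surfaces (rational method).
# The coefficient, rainfall depth, and area are hypothetical placeholders.

RUNOFF_COEFF_IMPERVIOUS = 0.9   # typical range ~0.7-0.95 for paved surfaces

def annual_runoff_m3(impervious_area_m2: float, annual_rainfall_m: float,
                     coeff: float = RUNOFF_COEFF_IMPERVIOUS) -> float:
    """Volume of runoff (m^3/yr) generated by an impervious surface."""
    return coeff * annual_rainfall_m * impervious_area_m2

# Example: 10 km^2 of impervious cover under 1.4 m/yr of rainfall.
volume = annual_runoff_m3(impervious_area_m2=10e6, annual_rainfall_m=1.4)
print(f"Potential recharge volume: {volume:.3e} m^3/yr")
```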

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this research is to develop design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. The design considerations focus on improving three key areas: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These considerations for environmental monitoring platforms using wireless sensor networks (WSN) are applied to the detection of methylmercury (MeHg) and the environmental parameters affecting its formation (methylation) and degradation (demethylation). The sampling methodology investigates a proof of concept for the monitoring of MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This part of the study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies lessons learned to organic Hg (e.g., MeHg) detection. Context awareness of a WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomena of interest at unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, and sampling rate and strategy) and improves the precision of control by adding potentially significant information about unmonitored locations. Two types of sensors are investigated for near-optimal placement in a WSN: (1) environmental sensors (e.g., humidity, moisture, temperature) and (2) visual sensors (e.g., cameras). Near-optimal placement of environmental sensors is found using a strategy that minimizes the variance of the spatial analysis over randomly chosen candidate sensor locations; the spatial analysis employs geostatistics, and optimization is carried out with Monte Carlo analysis. Visual sensor placement is accomplished for omnidirectional cameras operating in a WSN using an optimal placement metric (OPM), calculated for each grid point from line-of-sight (LOS) in a defined number of directions with known obstacles taken into consideration. Optimal areas for camera placement are those generating the largest OPMs. The statistics of placement are examined using Monte Carlo analysis with varying numbers of obstacles and cameras in a defined space.
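
A minimal sketch of the camera-placement metric just described, assuming a square occupancy grid, 16 ray directions, and randomly placed obstacles (all hypothetical choices, not the dissertation's parameters): each free cell is scored by its total unobstructed line-of-sight.

```python
import math
import random

# Score each free grid cell by total line-of-sight over N_DIRS rays;
# grid size, obstacle count, and direction count are illustrative.
W, H, N_DIRS = 40, 40, 16
random.seed(1)
obstacles = {(random.randrange(W), random.randrange(H)) for _ in range(80)}

def opm(x: int, y: int) -> float:
    """Total unobstructed line-of-sight distance from cell (x, y)."""
    total = 0.0
    for k in range(N_DIRS):
        ang = 2 * math.pi * k / N_DIRS
        dx, dy = math.cos(ang), math.sin(ang)
        step = 0
        while True:
            step += 1
            cx = int(round(x + dx * step))
            cy = int(round(y + dy * step))
            if not (0 <= cx < W and 0 <= cy < H) or (cx, cy) in obstacles:
                break
        total += step - 1          # last step hit an obstacle or boundary
    return total

scores = {(x, y): opm(x, y)
          for x in range(W) for y in range(H) if (x, y) not in obstacles}
best = max(scores, key=scores.get)
print(f"Best camera cell: {best}, OPM = {scores[best]:.0f}")
```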

Relevance:

30.00%

Publisher:

Abstract:

Enterprise Resource Planning (ERP) systems are software programs designed to integrate the functional requirements and operational information needs of a business. Competitive pressures and entry standards for participation in major manufacturing supply chains are creating greater demand for small-business ERP systems. The proliferation of new ERP offerings adds complexity to the process of identifying the right ERP software for a small or medium-sized enterprise (SME). The selection of an ERP system is a process in which a faulty conclusion poses a significant risk of failure for SMEs. The literature reveals that failure rates in ERP implementation remain very high and that faulty selection processes contribute to them; however, the literature lacks a systematic methodology for ERP selection by SMEs. This study provides a methodological approach to selecting the right ERP system for a small or medium-sized enterprise. The study employs Thomann's meta-methodology for methodology development: a survey of SMEs is conducted to inform the development of the methodology, and a case study is employed to test and revise the new methodology. The study shows that a rigorously developed, effective methodology that includes benchmarking experiences has been developed and successfully employed. It is verified that the methodology may be applied to the domain of users it was developed to serve, and that the test results are validated by expert users and stakeholders. Future research should investigate in greater detail the application of meta-methodologies to supplier selection and evaluation processes for services and software; additional research into the purchasing practices of small firms is also clearly needed.
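
The dissertation develops its own selection methodology; purely as an illustration of the kind of structured comparison such a methodology formalizes, here is a generic weighted-criteria scoring sketch. The criteria, weights, and vendor scores are hypothetical, not taken from the study.

```python
# Generic weighted-criteria scoring for comparing candidate ERP systems.
# All criteria, weights, and scores below are hypothetical.

criteria_weights = {"fit_to_requirements": 0.35, "total_cost": 0.25,
                    "vendor_support": 0.20, "ease_of_implementation": 0.20}

vendor_scores = {  # 1 (poor) .. 5 (excellent), per criterion
    "ERP_A": {"fit_to_requirements": 4, "total_cost": 3,
              "vendor_support": 5, "ease_of_implementation": 3},
    "ERP_B": {"fit_to_requirements": 5, "total_cost": 2,
              "vendor_support": 3, "ease_of_implementation": 4},
}

def weighted_score(scores: dict) -> float:
    """Sum of criterion scores weighted by their importance."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for vendor, scores in sorted(vendor_scores.items(),
                             key=lambda kv: -weighted_score(kv[1])):
    print(f"{vendor}: {weighted_score(scores):.2f}")
```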

Relevance:

30.00%

Publisher:

Abstract:

The etiology of central nervous system tumors (CNSTs) is mainly unknown. Aside from extremely rare genetic conditions, such as neurofibromatosis and tuberous sclerosis, the only unequivocally identified risk factor is exposure to ionizing radiation, and this explains only a very small fraction of cases. Using meta-analysis, gene networking, and bioinformatics methods, this dissertation explored the hypothesis that environmental exposures produce genetic and epigenetic alterations that may be involved in the etiology of CNSTs. A meta-analysis of epidemiological studies of pesticides and pediatric brain tumors revealed a significantly increased risk of brain tumors among children whose mothers had farm-related exposures during pregnancy. A dose response was recognized when this risk estimate was compared to those for risk of brain tumors from maternal exposure to non-agricultural pesticides during pregnancy and risk of brain tumors among children exposed to agricultural activities. Through meta-analysis of several microarray studies that compared normal tissue to astrocytomas, we identified a list of 554 genes that were differentially expressed in the majority of astrocytomas. Many of these genes have in fact been implicated in the development of astrocytoma, including EGFR, HIF-1α, c-Myc, WNT5A, and IDH3A. Reverse engineering of these 554 genes using Bayesian network analysis produced a gene network for each grade of astrocytoma (Grades I-IV), and ‘key genes’ within each grade were identified. The genes found to be most influential in the development of the highest grade of astrocytoma, glioblastoma multiforme (GBM), were COL4A1, EGFR, BTF3, MPP2, RAB31, CDK4, CD99, ANXA2, TOP2A, and SERBP1. Lastly, bioinformatics analysis of environmental databases and curated published results on GBM identified numerous potential pathways and gene-environment interactions that may play key roles in astrocytoma development. Findings from this research have strong potential to advance our understanding of the etiology of and susceptibility to CNSTs. Validation of our ‘key genes’ and pathways could potentially lead to useful tools for early detection and novel therapeutic options for these tumors.
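
For readers unfamiliar with the pooling step of a meta-analysis, a minimal fixed-effect sketch follows: log odds ratios are combined by inverse-variance weighting, with each standard error recovered from the reported 95% confidence interval. The three study estimates are hypothetical, not the pediatric brain tumor data from the dissertation.

```python
import math

# Fixed-effect meta-analysis: pool log odds ratios by inverse-variance
# weighting. The (OR, lower CI, upper CI) tuples below are hypothetical.
studies = [
    (1.6, 1.1, 2.3),
    (1.3, 0.9, 1.9),
    (2.0, 1.2, 3.4),
]

weights, weighted_logs = [], []
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the 95% CI
    w = 1.0 / se ** 2
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
print(f"Pooled OR = {math.exp(pooled_log):.2f} "
      f"(95% CI {math.exp(pooled_log - 1.96 * pooled_se):.2f}"
      f"-{math.exp(pooled_log + 1.96 * pooled_se):.2f})")
```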

Relevance:

30.00%

Publisher:

Abstract:

With the exponentially increasing demands and uses of GIS data visualization systems in fields such as urban planning, environment and climate change monitoring, weather simulation, and hydrographic gauging, research on and applications of geospatial vector and raster data visualization have become prevalent. However, current web GIS techniques are suitable mainly for static vector and raster data with no dynamically overlaid layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses two open problems: how to provide a large-scale dynamic vector and raster data visualization service with dynamically overlaid layers, accessible from various client devices through a standard web browser, and how to make that dynamic service as fast as a static one. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance-improvement solution, is proposed, designed, and implemented. The solution includes: quadtree-based indexing and parallel map tiling; the Legend String; vector data visualization with dynamic layer overlaying; vector data time-series visualization; an algorithm for vector data rendering; an algorithm for raster data re-projection; an algorithm for eliminating superfluous levels of detail; an algorithm for vector data gridding and re-grouping; and server-side cluster caching of vector and raster data.
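
As a sketch of the quadtree-based indexing idea, assuming the common Web Mercator tiling convention (the dissertation's exact scheme may differ), the snippet below converts a WGS84 coordinate to tile coordinates and then to a quadtree key at a given zoom level.

```python
import math

# Quadtree tile indexing as commonly used in web map tiling (Web Mercator).
# This shows the general idea of addressing tiles by quadkey; it is not
# necessarily the dissertation's exact scheme.

def latlon_to_tile(lat: float, lon: float, zoom: int) -> tuple:
    """Convert WGS84 lat/lon to tile (x, y) at the given zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
    return x, y

def tile_to_quadkey(x: int, y: int, zoom: int) -> str:
    """Interleave the bits of x and y into a quadtree key."""
    key = []
    for i in range(zoom, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        key.append(str(digit))
    return "".join(key)

x, y = latlon_to_tile(25.77, -80.19, zoom=10)   # Miami area
print(x, y, tile_to_quadkey(x, y, 10))
```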

Relevance:

30.00%

Publisher:

Abstract:

Rates of survival of victims of sudden cardiac arrest (SCA) treated with cardiopulmonary resuscitation (CPR) have shown little improvement over the past three decades. Since registered nurses (RNs) comprise the largest group of healthcare providers in U.S. hospitals, it is essential that they be competent in performing the four primary measures of CPR (compression, ventilation, medication administration, and defibrillation) in order to improve survival rates of SCA patients. The purpose of this experimental study was to test a color-coded SMOCK system on: (1) time to implement emergency patient care measures; (2) technical skills performance; (3) number of medical errors; and (4) team performance during simulated CPR exercises. The study sample was 260 RNs (M = 40 years, SD = 11.6) with work experience as an RN (M = 7.25 years, SD = 9.42). Nurses were allocated to a control or an intervention arm, each consisting of 20 groups of 5-8 RNs, for a total of 130 RNs per arm. Nurses in each study arm were given clinical scenarios requiring emergency CPR. Nurses in the intervention group wore color-labeled aprons (smocks) indicating their role assignment (medications, ventilation, compression, defibrillation, etc.) on the code team during CPR. Findings indicated that the intervention using color-labeled smocks for pre-assigned roles had a significant effect on the time nurses started compressions (t = 3.03, p = 0.005), ventilations (t = 2.86, p = 0.004), and defibrillations (t = 2.00, p = 0.05) when compared to the controls using the standard of care. In technical skills, nurses in the intervention groups performed compressions and ventilations significantly better than those in the control groups. The control groups made significantly more total errors (M = 7.55, SD = 1.54) than the intervention groups (M = 5.60, SD = 1.90; t = -2.61, p = 0.013). There were no significant differences in team performance measures between the groups. Study findings indicate that the use of color-labeled smocks during CPR emergencies resulted in shorter times to start emergency CPR, reduced errors, more technical skills completed successfully, and no differences in team performance.
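
The group comparisons reported above are independent-samples t-tests. A minimal sketch on simulated times-to-start-compressions follows (assuming NumPy and SciPy are available); the numbers are invented, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Simulated seconds-to-start-compressions for the two arms (n = 130 each);
# these values are illustrative only.
rng = np.random.default_rng(0)
control = rng.normal(35, 8, 130)        # standard-of-care arm
intervention = rng.normal(30, 8, 130)   # color-coded smock arm

t, p = stats.ttest_ind(control, intervention)
print(f"t = {t:.2f}, p = {p:.4f}")
```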

Relevance:

30.00%

Publisher:

Abstract:

This study was conducted to determine whether the use of the technology known as the Classroom Performance System (CPS), commonly referred to as "clickers", improves the learning gains of students enrolled in a biology course for science majors. CPS is one of a group of developing technologies adapted for providing feedback in the classroom using a learner-centered approach. It supports and facilitates discussion among students and between students and teachers, and provides for participation by passive students. Advocates, influenced by constructivist theories, claim increased academic achievement. In science teaching, the results have been mixed, but there is some evidence of improvements in conceptual understanding. The study employed a pretest-posttest, non-equivalent-groups experimental design. The sample consisted of 226 participants in six sections of a college biology course at a large community college in South Florida, with two instructors trained in the use of clickers. Each instructor randomly assigned their sections to CPS (treatment) and non-CPS (control) groups. All participants filled out a survey that included demographic data at the beginning of the semester. The treatment groups used clicker questions throughout, with discussion as necessary, whereas the control groups answered the same questions as quizzes, similarly engaging in discussion where necessary. Learning gains were assessed on a pre/posttest basis. The average learning gains, defined as the actual gain divided by the possible gain, were slightly better in the treatment group than in the control group, but the difference was statistically non-significant. An analysis of covariance (ANCOVA) with pretest scores as the covariate was conducted to test for significant differences between the treatment and control groups on the posttest. A second ANCOVA was used to determine the significance of differences between the treatment and control groups on the posttest scores after controlling for sex, GPA, academic status, experience with clickers, and instructional style. The results indicated a small increase in learning gains, but it was not statistically significant. The data did not support an increase in learning based on the use of the CPS technology. This study adds to the body of research that questions whether CPS technology merits classroom adoption.
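
A brief sketch of the two analyses named above, run on simulated scores (assuming pandas and statsmodels are available): the normalized learning gain, defined as actual gain divided by possible gain, and an ANCOVA fitting the posttest on group with the pretest as covariate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated pre/post scores for two groups; not the study's data.
rng = np.random.default_rng(42)
n = 113  # per group, roughly matching the sample of 226
df = pd.DataFrame({
    "group": ["CPS"] * n + ["control"] * n,
    "pre": rng.uniform(20, 60, 2 * n),
})
df["post"] = df["pre"] + rng.normal(15, 8, 2 * n)

# Normalized learning gain: actual gain / possible gain.
max_score = 100
df["norm_gain"] = (df["post"] - df["pre"]) / (max_score - df["pre"])
print(df.groupby("group")["norm_gain"].mean())

# ANCOVA: posttest on group, with pretest as the covariate.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.params)
```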

Relevance:

30.00%

Publisher:

Abstract:

Efficient and reliable techniques for power delivery and utilization are needed to accommodate the increased penetration of renewable energy sources in electric power systems. Such methods are also required to meet current and future demands from plug-in electric vehicles and high-power electronic loads. Distributed control and optimal power network architectures will lead to viable solutions to the energy management problem with high levels of reliability and security. This dissertation is aimed at developing and verifying new techniques for distributed control by deploying DC microgrids, involving distributed renewable generation and energy storage, within the operating AC power system. To achieve this, an energy system architecture was developed involving AC and DC networks, both with distributed generation and demands. The various components of the DC microgrid were designed and built, including DC-DC converters, voltage source inverters (VSI), and AC-DC rectifiers featuring novel designs developed by the candidate. New control techniques were developed and implemented to maximize the operating range of the power conditioning units used for integrating renewable energy into the DC bus. The control and operation of the DC microgrids in the hybrid AC/DC system involve intelligent energy management. Real-time energy management algorithms were developed and experimentally verified. These algorithms combine intelligent decision-making elements with an optimization process, aimed at enhancing the overall performance of the power system and mitigating the effect of heavy non-linear loads of variable intensity and duration. The developed algorithms were also used for managing the charging/discharging process of plug-in electric vehicle emulators. The protection of the proposed hybrid AC/DC power system was studied. Fault analysis and protection scheme and coordination were presented, in addition to ideas on how to retrofit currently available protection concepts and devices for AC systems to a DC network. A study was also conducted on the effect of changing the distribution architecture, and of distributing the storage assets across the zones of the network, on the system's dynamic security and stability. A practical shipboard power system was studied as an example of a hybrid AC/DC power system involving pulsed loads. The proposed hybrid AC/DC power system, along with the ideas, controls, and algorithms presented in this dissertation, was experimentally verified at the Smart Grid Testbed of the Energy Systems Research Laboratory.
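
The dissertation's real-time energy management algorithms are not spelled out in this abstract; the sketch below is only a highly simplified rule-based dispatch for a DC microgrid with PV generation and battery storage, with hypothetical thresholds and power limits. The actual algorithms layer optimization and intelligent decision-making on top of logic of this kind.

```python
# Simplified rule-based dispatch for one control interval of a DC microgrid
# with PV and a battery. All thresholds and limits are hypothetical.

def dispatch(pv_kw: float, load_kw: float, soc: float) -> dict:
    """Decide battery and grid power for one interval.

    soc: battery state of charge in [0, 1]. Positive battery power means
    discharging; positive grid power means importing from the AC side.
    """
    SOC_MIN, SOC_MAX, BATT_MAX_KW = 0.2, 0.9, 5.0
    net = load_kw - pv_kw                      # residual demand
    if net > 0:                                # PV falls short of the load
        batt = min(net, BATT_MAX_KW) if soc > SOC_MIN else 0.0
    else:                                      # PV surplus: charge if room
        batt = max(net, -BATT_MAX_KW) if soc < SOC_MAX else 0.0
    grid = net - batt                          # AC grid covers the rest
    return {"battery_kw": batt, "grid_kw": grid}

print(dispatch(pv_kw=3.0, load_kw=6.5, soc=0.55))   # discharge battery
print(dispatch(pv_kw=7.0, load_kw=2.0, soc=0.55))   # charge from surplus
```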

Relevance:

30.00%

Publisher:

Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools, and hardware acceleration yields significant performance improvement for highly mathematical calculations or repeated functions. The performance of an SoC system can then be improved if hardware acceleration is applied to the elements that incur performance overheads. The concepts presented in this study can be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 coder-decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) Hardware acceleration based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications such as performance, energy consumption, and resource costs are measured and analyzed, and the trade-off among these three factors is compared and balanced; different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the central bus (Bus-IP) design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
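
A useful sanity check on hotspot acceleration is Amdahl's law: the whole-system speedup is bounded by the fraction of runtime the accelerated hotspot accounts for, which is why the profiling step matters. The sketch below computes the bound with hypothetical numbers, not the dissertation's measurements.

```python
# Amdahl's-law bound on whole-system speedup when only a hotspot is
# accelerated. The hotspot fraction and accelerator factor are hypothetical.

def overall_speedup(hotspot_fraction: float, accel_factor: float) -> float:
    """System speedup when only the hotspot runs accel_factor times faster."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_factor)

# e.g. a hotspot taking 80% of runtime, accelerated 20x in FPGA fabric:
print(f"{overall_speedup(0.80, 20.0):.2f}x")   # ~4.17x overall
```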

Relevance:

30.00%

Publisher:

Abstract:

In the discussion - The Nevada Gaming Debt Collection Experience - by Larry D. Strate, Assistant Professor, College of Business and Economics at the University of Nevada, Las Vegas, the author initially outlines the article by saying: “Even though Nevada has had over a century of legalized gaming experience, the evolution of gaming debt collection has been a recent phenomenon. The author traces that history and discusses implications of the current law.” The discussion opens with a comparison between the gaming industries of New Jersey/Atlantic City and Las Vegas, Nevada; this contrast serves to point out the disparities in debt handling between the two. “There are major differences in the development of legalized gaming for both Nevada and Atlantic City. Nevada has had over a century of legalized gambling; Atlantic City, New Jersey, has completed a decade of its operation,” Strate informs you. “Nevada's gaming industry has been its primary economic base for many years; Atlantic City's entry into gaming served as a possible solution to a social problem. Nevada's processes of legalized gaming, credit play, and the collection of gaming debts were developed over a period of 125 years; Atlantic City's new industry began with gaming, gaming credit, and gaming debt collection simultaneously in 1976 [via the New Jersey Casino Control Act].” The irony here is that Atlantic City, the younger venue, has had a better system for handling debt collection than the historic and traditional Las Vegas properties. Many of these properties were duplicated in New Jersey, so a dichotomy existed whereby New Jersey casinos could recoup debt while their Nevada counterparts could not. “It would seem logical that a "territory" which permitted gambling in the early 1800s would have allowed the Nevada industry to collect its debts as any other legal enterprise. But it did not,” Strate says. Of course, this situation could not be allowed to continue, and Strate outlines the evolution; New Jersey effectively benefited from Nevada's experience. “The fundamental change in gaming debt collection came through the legislature, as the judicial decisions had declared gaming debts uncollectable by either a patron or a casino,” Strate informs you. “Nevada enacted its gaming debt collection act in 1983, six years after New Jersey,” Strate points out. One of the most noteworthy paragraphs in the entire article is this: “The fundamental change in 1983, and probably the most significant change in the history of gaming in Nevada since the enactment of the Open Gaming Law of 1931, was to allow non-restricted gaming licensees* to recover gaming debts evidenced by a credit instrument. The new law incorporated previously litigated terms with a new one, credit instrument.” The term is legally definable and gives Nevada courts an avenue of due process.

Relevance:

30.00%

Publisher:

Abstract:

Since the establishment of the evaluation system in 1975, the junior colleges in the Republic of China (Taiwan) have gone through six formal evaluations. Evaluation in schooling, like quality control in business, should be a systematic, formal, and continual process; it can serve as a strategy to refine the quality of education. The purpose of this research is to explore the current practice of junior college evaluation in Taiwan, providing insight into the development and quality of the current evaluation system. Moreover, this study identifies the sources of problems with the current evaluation system and provides suggestions for improvement. To attain these purposes, the research was undertaken both theoretically and practically. First, on the basis of a literature review, theories of educational evaluation were examined, along with the course and principles of development of the current practice in Taiwan. Second, by means of questionnaires, the views of evaluation committee members, junior college presidents, and administrators were obtained on evaluation models, methods, contents, organization, functions, criteria, grade reports, and other matters, with suggestions for improvement. The findings conclude that most evaluators and evaluatees think the purpose of evaluation can help the colleges explore their difficulties and problems. In addition, significant differences were found between the two groups regarding evaluation methods, contents, organization, functions, criteria, grade reports, and other matters; analysis of these data forms the basis for an improved method of evaluation for junior colleges in Taiwan.

Relevance:

30.00%

Publisher:

Abstract:

Over the past five years, XML has been embraced by both the research and industrial communities due to its promising prospects as a new data representation and exchange format on the Internet. The widespread popularity of XML creates an increasing need to store XML data in persistent storage systems and to enable sophisticated XML queries over the data. Currently available approaches to XML storage and retrieval are limited: native approaches are not yet mature, while non-native approaches such as the relational database approach cause inflexibility, extensive fragmentation, and excessive join operations. In this dissertation, I studied the issue of storing and retrieving XML data using the Semantic Binary Object-Oriented Database System (Sem-ODB), to leverage the advanced Sem-ODB technology for the emerging XML data model. First, a meta-schema-based approach was implemented to address the data-model mismatch inherent in non-native approaches. The meta-schema-based approach captures the metadata of both Document Type Definitions (DTDs) and Sem-ODB semantic schemas, thus enabling a dynamic and flexible mapping scheme. Second, a formal framework was presented to ensure precise and concise mappings; in this framework, both schemas and the conversions between them are formally defined and described. Third, after the major features of an XML query language, XQuery, were analyzed, a high-level XQuery to Semantic SQL (Sem-SQL) query translation scheme was described. This translation scheme takes advantage of the navigation-oriented query paradigm of Sem-SQL and thus avoids the excessive-join problem of relational approaches. Finally, the modeling capability of the Semantic Binary Object-Oriented Data Model (Sem-ODM) was explored from the perspective of conceptually modeling an XML Schema as a semantic schema. It was revealed that the advanced features of Sem-ODB, such as multi-valued attributes, surrogates, and the navigation-oriented query paradigm, are indeed beneficial in coping with XML storage and retrieval using a non-XML approach. Furthermore, extensions to Sem-ODB to make it work more effectively with XML data were also proposed.
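
To make the meta-schema idea concrete, here is a toy sketch in which both the DTD structure and the target semantic schema are recorded as data, and a mapping table is driven from that metadata rather than hard-coded. All element, class, and relation names here are invented for illustration; Sem-ODB's actual meta-schema is far richer than this.

```python
# Toy meta-schema mapping: describe both sides as data, then drive the
# DTD-to-semantic-schema mapping from that metadata. Names are invented.

dtd_meta = {
    "book": {"attributes": ["isbn"], "children": ["title", "author"]},
    "author": {"attributes": [], "children": ["name"]},
}

semantic_meta = {
    "Book": {"relations": {"isbn": "String", "title": "String",
                           "writtenBy": "Author"}},
    "Author": {"relations": {"name": "String"}},
}

# A mapping table from DTD constructs to semantic-schema constructs;
# in a real system this would be built by comparing the two metadata sets.
mapping = {
    ("book", "isbn"): ("Book", "isbn"),
    ("book", "title"): ("Book", "title"),
    ("book", "author"): ("Book", "writtenBy"),
    ("author", "name"): ("Author", "name"),
}

for (elem, part), (cls, rel) in mapping.items():
    print(f"DTD {elem}/{part} -> semantic {cls}.{rel}")
```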

Relevance:

30.00%

Publisher:

Abstract:

Orthogonal Frequency-Division Multiplexing (OFDM) has proven to be a promising technology for transmitting at higher data rates. Multicarrier Code-Division Multiple Access (MC-CDMA) is a transmission technique that combines the advantages of both OFDM and Code-Division Multiple Access (CDMA), so as to allow high transmission rates over severely time-dispersive multi-path channels without the need for a complex receiver implementation. MC-CDMA also exploits frequency diversity via the different subcarriers, and therefore allows high-code-rate systems to achieve good Bit Error Rate (BER) performance. Furthermore, the spreading in the frequency domain makes the time-synchronization requirement much less stringent than in traditional direct-sequence CDMA schemes. MC-CDMA still presents several problems. One is the high Peak-to-Average Power Ratio (PAPR) of the transmit signal: high PAPR leads to nonlinear distortion in the amplifier and results in inter-carrier self-interference plus out-of-band radiation. Suppressing Multiple Access Interference (MAI) is another crucial problem in MC-CDMA systems: imperfect cross-correlation characteristics of the spreading codes and multipath fading destroy the orthogonality among users and thereby cause MAI, which produces serious BER degradation. Moreover, in the uplink the signals received at a base station are always asynchronous; this also destroys the orthogonality among users and hence generates MAI, which degrades system performance. Beyond these two problems, interference must always be considered seriously for any communication system. In this dissertation, we design a novel MC-CDMA system with low PAPR and mitigated MAI. New semi-blind channel estimation and multi-user data detection based on Parallel Interference Cancellation (PIC) are applied in the system, and low-density parity-check (LDPC) codes are introduced to improve performance. Different interference models are analyzed for multi-carrier communication systems, and effective interference suppression for MC-CDMA systems is then employed. The experimental results indicate that our system not only significantly reduces PAPR and MAI but also effectively suppresses outside interference with low complexity. Finally, we present a practical cognitive application of the proposed system over a software-defined radio platform.
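
PAPR, the first problem named above, is simply the ratio of peak to mean instantaneous power of the time-domain multicarrier signal. A minimal sketch for one OFDM symbol follows; the subcarrier count and random QPSK data are illustrative, not the dissertation's configuration.

```python
import numpy as np

# PAPR of one OFDM symbol: peak-to-mean instantaneous power of the
# time-domain signal after the IFFT. Parameters are illustrative.

rng = np.random.default_rng(7)
N = 256                                        # subcarriers
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)             # time-domain OFDM symbol

power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR = {papr_db:.2f} dB")
```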

Relevance:

30.00%

Publisher:

Abstract:

Institutions have implemented many campus interventions to address student persistence and retention, one of which is Early Warning Systems (EWS). However, few research studies show evidence of interventions that incorporate noncognitive factors and skills and psychotherapy/psycho-educational processes in EWS. A qualitative study (phenomenological interviews and document analysis) of EWS at a public and a private four-year Florida university was conducted to explore EWS through the eyes of administrators: the ways administrators make sense of students' experiences and of the services they do and do not provide to assist students. Administrators' understanding of noncognitive factors and the executive-skills subset, and their contribution to retention and to the executive-skills development of at-risk students, were also explored. Hossler and Bean's multiple retention lenses theory/paradigms and Perez's retention strategies were used to guide the study. Six administrators from each institution who oversee and/or assist with EWS for first-time-in-college undergraduate students considered academically at risk for attrition were interviewed. Among numerous findings at Institution X: EWS was infrequently identified as a service, EWS training was not conducted, numerous cognitive and noncognitive issues/deficits were identified for students, and services and critical departments such as EWS did not work together to share students' information for students' benefit. Assessment measures were used to identify students' issues/deficits, but not to assess, track, and monitor them. Additionally, the institution's EWS addressed students' executive-skills function beyond time management and organizational skills, but did not address students' psychotherapy/psycho-educational processes. Among numerous findings at Institution Y: EWS was frequently identified as a service, EWS training was not conducted, numerous cognitive and noncognitive issues/deficits were identified for students, and services and critical departments such as EWS worked together to share students' information for students' benefit. Assessment measures were used to identify, track, and monitor students' issues/deficits, though not to assess them. Additionally, the institution's EWS addressed students' executive-skills function beyond time management and organizational skills, as well as psychotherapy/psycho-educational processes. Based on the findings, Perez's retention strategies were not utilized in EWS at Institution X, yet were collectively utilized in EWS at Institution Y, to achieve Hossler and Bean's retention paradigms. Future research could be designed to test the link between engaging in the specific promising activities identified in this research (one-to-one coaching, participation in student success workshops, academic contracts, and tutoring) and student success (e.g., higher GPA, retention). Further, because this research uncovered some concern about how best to handle students with physical and psychological disabilities, future research could link these same promising strategies to improving performance among, for example, students with ADHD or clinical depression.

Relevance:

30.00%

Publisher:

Abstract:

Low-frequency electromagnetic compatibility (EMC) is an increasingly important aspect in the design of practical systems, needed to ensure the functional safety and reliability of complex products. The opportunities for using numerical techniques to predict and analyze a system's EMC are therefore of considerable interest in many industries. As the first phase of the study, a proper model including all details of the components was required; advances in EMC modeling were therefore reviewed, classifying analytical and numerical models. The selected approach was finite element (FE) modeling, coupled with the distributed network method, to generate models of the converter's components and obtain the frequency-domain behavioral model of the converter. The method has the ability to reveal the behavior of parasitic elements and higher resonances, which have critical impacts on EMI problems. For the EMC and signature studies of machine drives, equivalent source modeling was studied. Considering the details of the multi-machine environment, including actual models, innovations in equivalent source modeling were introduced to decrease the simulation time dramatically. Several models were designed in this study; the voltage-current cube model and the wire model gave the best results. A GA-based particle swarm optimization (PSO) method was used as the optimization process. Superposition and suppression of the fields when coupling the components were also studied and verified. The simulation time of the equivalent model is 80-100 times shorter than that of the detailed model, and all tests were verified experimentally. As an application of the EMC and signature study, fault diagnosis and condition monitoring of an induction motor drive were developed using radiated fields. In addition to experimental tests, 3-D FE analysis was coupled with circuit-based software to implement the incipient fault cases, and identification was implemented using an artificial neural network (ANN) for seventy faulty cases. The simulation results were verified experimentally. Finally, identification of the types of power components was implemented. The results show that it is possible to identify the type of components, as well as the faulty components, by comparing the amplitudes of their stray-field harmonics. Identification using the stray fields is nondestructive and can be used for setups that cannot go offline and be dismantled.
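
The optimization step searches the equivalent source's parameters; below is a bare-bones, plain (non-GA-hybrid) PSO loop with standard coefficients and a placeholder sphere-function objective. In the actual study, the objective would measure the mismatch between the detailed field solution and the equivalent source's field; everything here is a sketch under those assumptions.

```python
import numpy as np

# Minimal particle swarm optimization (PSO) loop. The objective is a
# stand-in (sphere function); a real equivalent-source fit would score
# the mismatch between detailed and equivalent field solutions.

rng = np.random.default_rng(3)

def objective(p):
    """Placeholder cost: sum of squares per particle (rows of p)."""
    return np.sum(p ** 2, axis=1)

n_particles, dim, iters = 30, 4, 100
w, c1, c2 = 0.72, 1.49, 1.49      # standard inertia/acceleration values

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(f"Best cost after {iters} iterations: {pbest_val.min():.3e}")
```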