968 results for Domain elimination method


Relevance:

30.00%

Publisher:

Abstract:

Software faults are expensive and cause serious damage, particularly if discovered late or not at all; some faults tend to stay hidden. One goal of the thesis is to establish the state of the art in software fault elimination, since there are no recent surveys of the whole area. A structural framework is proposed for this unstructured field, with attention to compatibility between studies and to how relevant work can be found. Fault elimination means are surveyed, including knowledge of bugs, defect prevention and prediction, analysis, testing, and fault tolerance. The most common research issues in each area are identified and discussed, along with issues that receive too little attention. Recommendations are presented for software developers, researchers, and teachers. Only the main lines of research are covered, and the emphasis is on technical aspects. The survey was conducted through searches in the IEEE, ACM, Elsevier, and Inspec databases, complemented by a systematic search of a few well-known related journals over recent time intervals; some other journals, conference proceedings, books, reports, and Internet articles were investigated as well. The following problems were found and solutions for them discussed. A common misunderstanding is that quality assurance consists of testing only, so many checks are performed and some methods applied only in the late testing phase. Many types of static review are almost forgotten, even though they reveal faults that are hard to detect by other means. Other neglected areas are knowledge of bugs, awareness of continually repeated bugs, and lightweight means to increase reliability. Compatibility between studies is often poor, which also makes documents harder to understand. Some means, methods, and problems are considered method- or domain-specific when they are not, and the field lacks cross-field research.

Abstract:

The increasing demand for fatty acid-free lecithin has required modifications to existing purification methods. In this technical note we describe a purification procedure with the following steps: a) homogenization and extraction of yolks from fresh eggs with acetone; b) solubilization with ethanol followed by solvent elimination; and c) repeated solubilization/precipitation with petroleum ether/acetone. The crude extract was chromatographed on neutral alumina, which was exhaustively washed with chloroform before elution with chloroform:methanol, allowing the sequential separation of fatty acids and lecithin. The chromatographic behavior and mass spectra of the product are presented. This fast procedure yields fatty acid-free lecithin at a competitive cost.

Abstract:

This work is devoted to the development of a numerical method for convection-dominated convection-diffusion problems with a reaction term, covering both non-stiff and stiff chemical reactions. The technique is based on unifying Eulerian-Lagrangian schemes (the particle transport method) within the framework of operator splitting. In the computational domain, a particle set is assigned to solve the convection-reaction subproblem along the characteristic curves created by the convective velocity. At each time step, the convection, diffusion, and reaction terms are solved separately, assuming that each phenomenon occurs in a sequential fashion. Moreover, adaptivity and projection techniques are used, respectively, to add particles in regions of high gradients (steep fronts) and discontinuities and to transfer the solution from the particle set onto grid points. The numerical results show that the particle transport method improves the solutions of convection-diffusion-reaction (CDR) problems. Nevertheless, the method is time-consuming compared with classical techniques such as the method of lines. Apart from this drawback, the particle transport method can be used to simulate problems that involve moving steep or smooth fronts, such as the separation of two or more components in a system.
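The splitting idea described above can be sketched on a simple 1-D model problem. The following fragment is illustrative only (the grid, parameters, and the semi-Lagrangian treatment of the convection step are assumptions, not taken from the thesis): it solves u_t + v u_x = D u_xx - k u by handling convection, diffusion, and reaction one after another within each time step.

```python
import numpy as np

# Lie (sequential) operator splitting for u_t + v u_x = D u_xx - k u
# on a periodic domain; each sub-step handles one phenomenon.
nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)
v, D, k = 1.0, 1e-3, 0.5            # convection speed, diffusivity, reaction rate
dt, nsteps = 0.4 * dx / v, 250

u = np.exp(-((x - 0.25) / 0.05) ** 2)   # initial steep Gaussian front
mass0 = u.sum() * dx

for _ in range(nsteps):
    # 1) convection: follow the characteristics (semi-Lagrangian shift)
    u = np.interp((x - v * dt) % L, x, u, period=L)
    # 2) diffusion: explicit central differences
    u += D * dt * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    # 3) reaction: exact integration of du/dt = -k u over one step
    u *= np.exp(-k * dt)

# convection and diffusion conserve mass here, so total mass decays as exp(-k t)
mass_ratio = u.sum() * dx / mass0
```

The front is transported without the numerical oscillations a naive Eulerian upwind scheme would need extra care to avoid, which is the motivation for the particle treatment of the convection step.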

Abstract:

The amount of installed wind power has been growing exponentially during the past ten years. As wind turbines have become a significant source of electrical energy, the interactions between the turbines and the electric power network need to be studied more thoroughly than before. In particular, the behavior of the turbines in fault situations is of prime importance; simply disconnecting all wind turbines from the network during a voltage drop is no longer acceptable, since this would contribute to a total network collapse. These requirements have contributed to the increased role of simulations in the study and design of the electric drive train of a wind turbine. When planning a wind power investment, the selection of the site and the turbine is crucial for the economic feasibility of the installation. Economic feasibility, in turn, is the factor that determines whether or not investment in wind power will continue, contributing to green electricity production and the reduction of emissions. In the selection of the installation site and the turbine (siting and site matching), the properties of the electric drive train of the planned turbine have so far generally not been taken into account. Additionally, although loss minimization of some of the individual components of the drive train has been studied, the drive train as a whole has received less attention. Furthermore, as a wind turbine will typically operate at a power level below nominal most of the time, efficiency analysis at the nominal operating point alone is not sufficient. This doctoral dissertation attempts to combine the two aforementioned areas of interest by studying the applicability of time domain simulations in the analysis of the economic feasibility of a wind turbine.
The utilization of a general-purpose time domain simulator, otherwise applied to the study of network interactions and control systems, in the economic analysis of the wind energy conversion system is studied. The main benefits of the simulation-based method over traditional methods based on analytic calculation of losses include the ability to reuse and recombine existing models; the ability to analyze interactions between the components and subsystems in the electric drive train (something which is impossible when different subsystems are considered as independent blocks, as is commonly done in the analytical calculation of efficiencies); the ability to analyze, in a rather straightforward manner, the effect of selections other than physical components, for example control algorithms; and the ability to verify assumptions about the effects of a particular design change on the efficiency of the whole system. Based on the work, it can be concluded that differences between two configurations can be seen in the economic performance with only minor modifications to the simulation models used in network interaction and control studies. This eliminates the need to develop analytic expressions for losses and enables the study of the system as a whole instead of modeling it as a series connection of independent blocks with no loss interdependencies. Three example cases (site matching, component selection, control principle selection) are provided to illustrate the usage of the approach and analyze its performance.
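As a toy illustration of why part-load efficiency matters for the economics (all numbers below are hypothetical and not from the dissertation), one can weight a drive-train efficiency curve with a Weibull wind-speed distribution and compare the annual energy yield of two candidate configurations:

```python
import numpy as np

# Hypothetical comparison of two drive-train efficiency curves at one site.
k_shape, c_scale = 2.0, 7.0                 # assumed Weibull wind statistics (m/s)
v = np.linspace(0.5, 25.0, 200)
dv = v[1] - v[0]
pdf = (k_shape / c_scale) * (v / c_scale) ** (k_shape - 1) \
      * np.exp(-((v / c_scale) ** k_shape))

P_rated = 2.0e6                             # W, hypothetical 2 MW turbine
P_in = np.clip(P_rated * (v / 12.0) ** 3, 0.0, P_rated)
P_in[(v < 3.0) | (v > 25.0)] = 0.0          # cut-in / cut-out speeds

load = P_in / P_rated
eta_A = np.full_like(v, 0.93)               # config A: flat efficiency curve
eta_B = 0.85 + 0.10 * load                  # config B: efficient only near nominal

hours = 8760.0
aep_A = hours * np.sum(pdf * P_in * eta_A) * dv   # Wh per year
aep_B = hours * np.sum(pdf * P_in * eta_B) * dv

print(f"AEP A: {aep_A / 1e6:.1f} MWh, AEP B: {aep_B / 1e6:.1f} MWh")
```

Because most of the energy at this assumed site is captured below rated power, the configuration with the better part-load behaviour wins even though the other has the higher nominal-point efficiency, which is the dissertation's point about nominal-point analysis being insufficient.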

Abstract:

The background and inspiration for the present study is earlier research on applications of boundary identification in the metal industry. Effective boundary identification permits smaller safety margins and longer service intervals for the equipment in industrial high-temperature processes, without an increased risk of equipment failure. Ideally, a boundary identification method would be based on following some indirect variable that can be measured routinely or at low cost. For smelting furnaces, one such variable is the temperature at different positions in the wall, which can be used as the input to a boundary identification method for monitoring the furnace wall thickness. We give the background and the motivation for choosing the geometrically one-dimensional dynamic model for boundary identification, discussed in the later part of the work, over a multidimensional geometric description. In the industrial applications in question, the dynamics and the advantages of a simple model structure are more important than an exact geometric description. Solution methods for the so-called sideways heat equation have much in common with boundary identification. We therefore study properties of the solutions of this equation, the influence of measurement errors and what is usually called contamination by measurement noise, regularization, and more general consequences of the ill-posedness of the sideways heat equation. We study a set of three different methods for boundary identification, of which the first two were developed from a strictly mathematical starting point and the third from a more applied one. The methods have different properties, with specific advantages and disadvantages. The purely mathematically based methods are characterized by good accuracy and low numerical cost, at the price of low flexibility in the formulation of the partial differential equation describing the model. The third, more applied, method is characterized by poorer accuracy, caused by a higher degree of ill-posedness of the more flexible model.
For this method an error estimate was also attempted, which was later observed to agree with practical computations using the method. The study can be regarded as a good starting point and mathematical basis for developing industrial applications of boundary identification, especially towards handling nonlinear and discontinuous material properties and sudden changes caused by wall material falling off. With the methods treated, it appears possible to achieve a robust, fast, and sufficiently accurate boundary identification method of limited complexity.
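A minimal illustration of the indirect-measurement idea (steady, linear one-dimensional conduction assumed; all numbers are hypothetical) is to extrapolate the temperature gradient between two in-wall thermocouples to the known hot-face temperature:

```python
# Minimal sketch: two thermocouples in a furnace wall give the local
# temperature gradient, which is extrapolated to the assumed hot-face
# (melt contact) temperature to estimate the remaining wall thickness.
T1, x1 = 420.0, 0.30   # degC at 0.30 m from the cold face (hypothetical)
T2, x2 = 260.0, 0.20   # degC at 0.20 m from the cold face (hypothetical)
T_melt = 1150.0        # assumed temperature at the hot face

grad = (T1 - T2) / (x1 - x2)          # degC per metre, towards the hot face
x_hot = x1 + (T_melt - T1) / grad     # position where T reaches T_melt
print(f"estimated wall thickness: {x_hot:.2f} m")
```

The full dynamic problem treated in the study is far harder than this steady-state extrapolation precisely because of the ill-posedness discussed above, but the sketch shows why routinely measurable wall temperatures are an attractive input signal.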

Abstract:

The Mathematica system (version 4.0) is employed in the solution of nonlinear diffusion and convection-diffusion problems, formulated as transient one-dimensional partial differential equations with potential-dependent equation coefficients. The Generalized Integral Transform Technique (GITT) is first implemented for the hybrid numerical-analytical solution of such classes of problems, through the symbolic integral transformation and elimination of the space variable, followed by the utilization of the built-in Mathematica function NDSolve for handling the resulting transformed ODE system. This approach offers an error-controlled final numerical solution, through the simultaneous control of local errors in this reliable ODE solver and of the proposed eigenfunction expansion truncation order. For co-validation purposes, the same built-in function NDSolve is employed in the direct solution of these partial differential equations, as made possible by the algorithms implemented in Mathematica (versions 3.0 and up) based on the method of lines. Various numerical experiments are performed, and the relative merits of each approach are critically pointed out.
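The transform-then-solve structure can be mimicked outside Mathematica. The sketch below uses the linear diffusion equation for clarity (the paper treats nonlinear coefficients, for which the transformed system becomes coupled): u_t = u_xx with homogeneous Dirichlet conditions is expanded in sine eigenfunctions, and the transformed ODE system is handed to SciPy's solve_ivp, playing the role of NDSolve.

```python
import numpy as np
from scipy.integrate import solve_ivp

# GITT-style sketch for u_t = u_xx, u(0,t) = u(1,t) = 0: the space variable
# is eliminated by the eigenfunction expansion, leaving an ODE system.
N = 8                                   # truncation order of the expansion
n = np.arange(1, N + 1)
lam = (n * np.pi) ** 2                  # eigenvalues of -d2/dx2, Dirichlet BCs

# initial condition u(x, 0) = sin(pi x) -> transformed coefficients
a0 = np.zeros(N)
a0[0] = 1.0

# transformed (here diagonal) ODE system da_n/dt = -lam_n a_n,
# integrated with a stiff-capable, error-controlled solver
sol = solve_ivp(lambda t, a: -lam * a, (0.0, 0.1), a0, method="BDF",
                rtol=1e-8, atol=1e-10)

x = np.linspace(0.0, 1.0, 101)
u = np.sin(np.outer(x, n * np.pi)) @ sol.y[:, -1]   # inverse transform
u_exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
err = np.max(np.abs(u - u_exact))       # truncation + ODE error, both small
```

The two error sources named in the abstract are both visible here: the solver tolerances (rtol/atol) and the truncation order N of the expansion.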

Abstract:

This thesis aims to find an effective way of conducting a target audience analysis (TAA) in the cyber domain. Two main focal points are addressed: the nature of the cyber domain and the method of the TAA. For the cyber domain, the objective is to find the opportunities, restrictions, and caveats that result from its digital and temporal nature; this is the environment in which the TAA method is examined in this study. As the TAA is an important step of any psychological operation and critical to its success, the method used must cover all the main aspects affecting the choice of a proper target audience. The first part of the research was done by sending an open-ended questionnaire to operators in the field of information warfare, both in Finland and abroad. As the results were inconclusive, the research was completed by assessing the applicability of the United States Army field manual FM 3-05.301 in the cyber domain via a theory-based content analysis. FM 3-05.301 was chosen because it presents a complete method for the TAA process. The findings were tested against the results of the questionnaire and recent scientific research in the field of psychology. The cyber domain was found to be "fast and vast", volatile, and uncontrollable. Although governed by laws to some extent, the cyber domain is unpredictable by nature and cannot be controlled to any reasonable degree. The anonymity and lack of verification often present in digital channels mean that anyone can have an opinion, and any message sent may change or even become counterproductive to its original purpose. The TAA method of FM 3-05.301 is applicable in the cyber domain, although some parts of the method are outdated and are therefore suggested to be updated if used in that environment. The target audience categories of step two of the process were replaced by new groups that exist in the digital environment.
The accessibility assessment (step eight) was also redefined, since in digital media the mere existence of a written text is typically not enough to convey the intended message to the target audience. The scientific studies made in computer science, psychology, and sociology on the behavior of people in social media (and in the cyber domain overall) call for a more extensive remake of the TAA process; this falls, however, outside the scope of this work. It is thus suggested that further research be carried out in search of computer-assisted methods and a more thorough TAA process, utilizing the latest discoveries about human behavior.

Abstract:

R,S-sotalol, a β-blocker drug with class III antiarrhythmic properties, is prescribed to patients with ventricular, atrial, and supraventricular arrhythmias. A simple and sensitive method based on HPLC with fluorescence detection is described for the quantification of the R,S-sotalol racemate in 500 µl of plasma. R,S-sotalol and its internal standard (atenolol) were eluted after 5.9 and 8.5 min, respectively, from a 4-micron C18 reverse-phase column using a mobile phase consisting of 80 mM KH2PO4, pH 4.6, and acetonitrile (95:5, v/v) at a flow rate of 0.5 ml/min, with fluorescence detection at λex = 235 nm and λem = 310 nm. This method, validated on the basis of R,S-sotalol measurements in spiked blank plasma, presented 20 ng/ml sensitivity, 20-10,000 ng/ml linearity, and 2.9% and 4.8% intra- and interassay precision, respectively. Plasma sotalol concentrations were determined by applying this method to five high-risk patients with atrial fibrillation admitted to the Emergency Service of the Medical School Hospital, who received sotalol, 160 mg po, as a loading dose. Blood samples were collected from a peripheral vein at zero, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0 and 24.0 h after drug administration, and a two-compartment open model was applied. The data obtained, expressed as means, were: CMAX = 1230 ng/ml, TMAX = 1.8 h, AUCT = 10645 ng·h/ml, Kab = 1.23 h⁻¹, α = 0.95 h⁻¹, β = 0.09 h⁻¹, t(1/2)β = 7.8 h, ClT/F = 3.94 ml·min⁻¹·kg⁻¹, and Vd/F = 2.53 l/kg. Good systemic availability and fast absorption were obtained. Drug distribution was reduced to the same extent in terms of total body clearance when patients and healthy volunteers were compared, and consequently the elimination half-life remained unchanged. Thus, the method described in the present study is useful for therapeutic drug monitoring, pharmacokinetic investigation, and pharmacokinetic-pharmacodynamic studies of sotalol in patients with tachyarrhythmias.
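The reported terminal parameters are internally consistent, which can be checked directly from the first-order elimination relation t1/2 = ln(2)/β:

```python
import math

# Terminal half-life implied by the reported hybrid rate constant beta.
beta = 0.09                 # 1/h, from the study
t_half = math.log(2.0) / beta
print(round(t_half, 1))     # 7.7 h, in line with the reported t(1/2)beta = 7.8 h
```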

Abstract:

Thylakoid membrane fractions were prepared from specific regions of the thylakoid membranes of spinach (Spinacia oleracea). These fractions, which include grana (B3), stroma (T3), grana core (BS), margins (Ma), and purified stroma (Y100), were prepared using a non-detergent method involving mild sonication and aqueous two-phase partitioning. The significance of PSIIα and PSIIβ centres has been described extensively in the literature. Previous work has characterized two types of PSII centres proposed to exist in different regions of the thylakoid membrane: α-centres are suggested to aggregate in the stacked regions of grana, whereas β-centres are located in the unstacked regions of the stroma lamellae. The goal of this study is to characterize photosystem II from isolated membrane vesicles representing different regions of the higher plant thylakoid membrane. The low temperature absorption spectra were deconvoluted via Gaussian decomposition to estimate the relative sub-components that contribute to each fraction's signature absorption spectrum. The relative sizes of the functional PSII antenna and the fluorescence induction kinetics were measured and used to determine the relative contributions of PSIIα and PSIIβ to each fraction. Picosecond chlorophyll fluorescence decay kinetics were collected for each fraction to characterize and gain insight into excitation energy transfer and primary electron transport in PSIIα and PSIIβ centres. The results presented here clearly illustrate the widely held notions of PSII/PSI and PSIIα/PSIIβ spatial separation. This study suggests that the chlorophyll fluorescence decay lifetimes of PSIIβ centres are shorter than those of PSIIα centres and that, at FM, the longer lived of the two PSII components yields a larger contribution in PSIIα-rich fractions, but a smaller one in PSIIβ-rich fractions.

Abstract:

Activity of the medial frontal cortex (MFC) has been implicated in attention regulation and performance monitoring. The MFC is thought to generate several event-related potential (ERP) components, known as medial frontal negativities (MFNs), that are elicited when a behavioural response becomes difficult to control (e.g., following an error or when shifting from a frequently executed response). The functional significance of MFNs has traditionally been interpreted in the context of the paradigm used to elicit a specific response, such as errors. In a series of studies, we consider the functional similarity of multiple MFC brain responses by designing novel performance monitoring tasks and exploiting advanced methods for electroencephalography (EEG) signal processing and robust estimation statistics for hypothesis testing. In study 1, we designed a response cueing task and used Independent Component Analysis (ICA) to show that the latent factors describing a MFN to stimuli that cued the potential need to inhibit a response on upcoming trials also accounted for the medial frontal brain responses that occurred when individuals made a mistake or inhibited an incorrect response. It was also found that increases in theta power occurred for each of these task events, and that the effects were evident at the group level and in single cases. In study 2, we replicated our method of classifying MFC activity to cues in our response task and showed again, using additional tasks, that error commission, response inhibition, and, to a lesser extent, the processing of performance feedback all elicited similar changes across MFNs and theta power. In the final study, we converted our response cueing paradigm into a saccade cueing task in order to examine the oscillatory dynamics of response preparation.
We found that, compared to easy pro-saccades, successfully preparing a difficult anti-saccade response was characterized by an increase in MFC theta power and the suppression of posterior alpha power prior to executing the eye movement. These findings align with a large body of literature on performance monitoring and ERPs, and indicate that MFNs, along with their signature in theta power, reflect the general process of controlling attention and adapting behaviour without the need to induce error commission, response inhibition, or the presentation of negative feedback.
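The ICA step can be illustrated on synthetic data. This is a generic FastICA demonstration, not the studies' EEG pipeline; the "theta-like" source, the second source, and the mixing matrix are all invented for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Unmix two synthetic sources from their linear mixtures: the same
# operation used to isolate latent medial-frontal components from
# multichannel EEG recordings.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
s1 = np.sin(2 * np.pi * 6 * t)               # 6 Hz "theta-like" source
s2 = np.sign(np.sin(2 * np.pi * 13 * t))     # unrelated square-wave source
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                   # mixing into two "channels"
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                 # recovered sources (up to sign/scale)
```

ICA recovers the sources only up to ordering, sign, and scale, which is why EEG components must afterwards be matched to events and topographies, as done in the studies above.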

Abstract:

Recent advances in wavelength selective switches (WSS) have fostered the development of multi-degree colorless and directionless reconfigurable optical add/drop multiplexers (ROADMs), regarded as highly promising equipment for future wavelength division multiplexing (WDM) mesh networks. However, their asymmetric switching property complicates the routing and wavelength assignment (RWA) problem, and most existing RWA algorithms do not take this asymmetry into account. Service interruptions caused by equipment failures along the lightpaths (the output of solving the RWA problem) lead to the loss of large amounts of data. Research into the survivability of optical networks, that is, the maintenance of services, particularly in the event of equipment failures, is therefore indispensable. Most previous publications have focused on protection schemes that guarantee the rerouting of traffic in the event of a single link failure. However, protection designed against single link failures is not always sufficient for the survivability of WDM networks, given the many other failure types that have become common, such as equipment breakdowns and double or triple link failures. In addition, there are considerable challenges in protecting large multi-domain optical networks composed of single-domain networks interconnected by inter-domain links, where the internal topological details of a domain are generally not shared externally.
This thesis aims to propose large-scale optimization models and solutions to the problems mentioned above. These models generate optimal or near-optimal solutions with mathematically proven optimality gaps. To this end, we rely on the column generation technique to solve the underlying large-scale linear programs. Concerning provisioning in optical networks, we propose a new integer linear programming (ILP) model for the RWA problem that maximizes the number of accepted requests (GoS, Grade of Service). The resulting model is a large-scale ILP that yields exact solutions of fairly large RWA instances, assuming that all nodes are asymmetric and that a switching connectivity matrix is given. We then modify the model and propose a solution to the RWA problem that finds the best switching matrix for a given number of ports and switching connections, while satisfying/maximizing the grade of service. Concerning the protection of single-domain networks, we propose solutions for protection against multiple failures. Specifically, we develop single-domain protection against multiple failures using Failure Independent Path Protecting (FIPP) p-cycles and Failure Dependent Path Protecting (FDPP) p-cycles. We then propose a new flow-based formulation for FDPP p-cycles subject to multiple failures.
The new model raises a scalability issue: it has an exponential number of constraints because of certain subtour elimination constraints. Consequently, in order to solve it efficiently, we examine (i) a hierarchical decomposition of the pricing problem in the decomposition model, and (ii) heuristics to handle the large number of constraints efficiently. Concerning protection in multi-domain networks, we propose schemes for protection against single link failures. First, an optimization model is proposed for a centralized protection scheme, assuming that the network management is aware of all the details of the physical topologies of the domains. We then propose a distributed optimization model for protection in multi-domain optical networks, a much more realistic formulation since it is based on the assumption of distributed network management. Next, we add shared bandwidth in order to reduce the cost of protection: the bandwidth of each intra-domain link is shared between FIPP p-cycles and p-cycles in a first study, and between the paths for link/path protection in a second study. Finally, we recommend parallel strategies for solving large multi-domain optical networks. The results of the study enable the efficient design of a protection scheme for a very large multi-domain network (45 domains), the largest examined in the literature, with both a centralized and a distributed scheme.
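For intuition about the wavelength-continuity constraint at the heart of RWA (the thesis attacks the full problem with large-scale ILP and column generation; the fragment below is only a toy first-fit heuristic on invented routes):

```python
# Each lightpath must use the SAME wavelength on every link it traverses
# (wavelength continuity); first-fit picks the lowest index that is free
# on all of them. Routes and link names are made up for the example.
routes = {                      # lightpath -> list of links it traverses
    "d1": [("A", "B"), ("B", "C")],
    "d2": [("B", "C"), ("C", "D")],
    "d3": [("A", "B")],
    "d4": [("C", "D")],
}

used = {}                       # link -> set of wavelengths already in use
assignment = {}
for demand, links in routes.items():
    w = 0
    # first wavelength free on *every* link of the route
    while any(w in used.setdefault(link, set()) for link in links):
        w += 1
    assignment[demand] = w
    for link in links:
        used[link].add(w)

print(assignment)               # {'d1': 0, 'd2': 1, 'd3': 1, 'd4': 0}
```

An ILP formulation replaces this greedy choice with decision variables and an objective (e.g., maximize accepted requests), and the asymmetric-node case further restricts which port-to-port connections a lightpath may use inside each ROADM.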

Abstract:

The thesis mainly focuses on material characterization in different environments: freely available samples in planar form, biological samples available in small quantities, and buried objects. The free space method finds many applications in the fields of industry, medicine, and communication. As it is a non-contact method, it can be employed for monitoring the electrical properties of materials moving on a conveyor belt in real time; measurement on such systems at high temperature is also possible. NID theory can be applied to the characterization of thin films: the dielectric properties of thin films deposited on any dielectric substrate can be determined. In the chemical industry, the stages of a chemical reaction can be monitored online, which is more efficient as it saves time and avoids the risk of sample collection. Dielectric contrast is one of the main factors that decides the detectability of a system. It could be noted that two dielectric objects of the same dielectric constant 3.2 (εr of a plastic mine) placed in a medium of dielectric constant 2.56 (εr of sand) could even be detected by employing time domain analysis of the reflected signal. This type of detection is of strategic importance, as it provides a solution to the problem of clearance of non-metallic mines; the demining of these mines using conventional techniques has proved futile. Studies on the detection of voids and leakage in pipes also find many applications. The determined electrical properties of tissues can be used for numerical modeling of cells, microwave imaging, SAR tests, etc. All these techniques need accurate determination of the dielectric constant. In the modern world, the use of cellular and other wireless communication systems is booming, and at the same time people are concerned about the hazardous effects of microwaves on living cells. The effect is usually studied on human phantom models.
The construction of the models requires knowledge of the dielectric parameters of the various body tissues; it is in this context that the present study gains significance. The case study on biological samples shows that the properties of normal and infected body tissues are different. Even though the change in the dielectric properties of infected samples from those of normal ones may not be clear evidence of an ailment, it is an indication of some disorder. In the medical field, the free space method may be adapted for imaging biological samples. The method can also be used in wireless technology: the electrical properties and attenuation of obstacles in the path of RF waves can be evaluated using free waves, and an intelligent system could be developed for controlling the power output or frequency depending on the feedback values of the attenuation. The simulation employed in GPR can be extended to explore the effects of factors such as different proportions of water content in the soil and the level and roughness of the soil on the reflected signal; this may find applications in geological exploration. In the detection of mines, a state-of-the-art technique for scanning and imaging an active minefield can be developed using GPR: the probing antenna can be attached to a robotic arm capable of three degrees of rotation, and the whole detecting system can be housed in a military vehicle. In industry, a system based on the GPR principle can be developed for monitoring liquid or gas through a pipe, as a pipe with and without the sample gives different reflection responses; it may also be implemented for the online monitoring of the different stages of extraction and purification of crude petroleum in a plant. Since biological samples show fluctuations in their dielectric nature with time and other physiological conditions, more investigation in this direction should be done.
The infected cells at various stages of advancement and normal cells should be analysed; the results of these comparative studies can be utilized for detecting the onset of such diseases. By studying the properties of infected tissues at different stages, the threshold of detectability of infected cells can be determined.
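The dielectric-contrast argument can be made concrete with the permittivities quoted above (normal incidence and lossless media assumed):

```python
import math

# Normal-incidence reflection coefficient at the sand/mine interface,
# using the permittivities quoted in the abstract (sand er = 2.56,
# plastic mine er = 3.2). A nonzero value is what makes the buried
# non-metallic object visible in the time-domain reflected signal.
er_sand, er_mine = 2.56, 3.2
gamma = (math.sqrt(er_sand) - math.sqrt(er_mine)) / \
        (math.sqrt(er_sand) + math.sqrt(er_mine))
print(round(gamma, 3))   # -0.056: weak, but nonzero and hence detectable
```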

Abstract:

A Monte Carlo simulation study of vacancy-assisted domain growth in asymmetric binary alloys is presented. The system is modeled using a three-state ABV Hamiltonian which includes an asymmetry term. Our simulated system is a stoichiometric two-dimensional binary alloy with a single vacancy which evolves according to the vacancy-atom exchange mechanism. We find that, compared to the symmetric case, the ordering process slows down dramatically. The asymptotic behavior is algebraic and characterized by the Allen-Cahn growth exponent x = 1/2. The late stages of the evolution are preceded by a transient regime strongly affected by both the temperature and the degree of asymmetry of the alloy. The results are discussed and compared to those obtained for the symmetric case.
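A stripped-down version of the vacancy-atom exchange dynamics can be sketched as follows. This is a plain nearest-neighbour lattice model with Metropolis acceptance, not the paper's ABV Hamiltonian with its asymmetry term; lattice size, coupling, and temperature are illustrative.

```python
import math
import random

# Single vacancy random-walking through a 2-D stoichiometric AB lattice;
# the only allowed move is a vacancy-atom exchange (conserved dynamics).
Lsize, J, T = 16, 1.0, 0.5
random.seed(1)
# perfectly ordered AB alloy encoded as +1/-1, with one site emptied (0)
lat = [[1 if (i + j) % 2 == 0 else -1 for j in range(Lsize)]
       for i in range(Lsize)]
vi = vj = 0
lat[vi][vj] = 0

def site_energy(i, j, s):
    """Interaction of atom s at (i, j); +J*s*s' favours unlike neighbours."""
    return sum(J * s * lat[(i + di) % Lsize][(j + dj) % Lsize]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(5000):
    di, dj = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
    ni, nj = (vi + di) % Lsize, (vj + dj) % Lsize
    s = lat[ni][nj]
    e_before = site_energy(ni, nj, s)      # the vacancy contributes zero
    lat[vi][vj], lat[ni][nj] = s, 0        # tentative vacancy-atom exchange
    dE = site_energy(vi, vj, s) - e_before
    if dE > 0.0 and random.random() >= math.exp(-dE / T):
        lat[vi][vj], lat[ni][nj] = 0, s    # reject: undo the exchange
    else:
        vi, vj = ni, nj                    # accept: the vacancy has moved
```

The exchange dynamics conserve the composition exactly, which is the defining property of vacancy-assisted ordering; the asymmetry term of the paper would enter as an extra site-dependent contribution to the energy.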

Abstract:

The presence of microcalcifications in mammograms can be considered an early indication of breast cancer. A fast fractal block coding method to model mammograms for detecting the presence of microcalcifications is presented in this paper. The conventional fractal image coding method takes an enormous amount of time during the fractal block encoding procedure. In the proposed method, the image is divided into shade and non-shade blocks based on the dynamic range, and only non-shade blocks are encoded using the fractal encoding technique. Since the number of image blocks is considerably reduced in the matching domain search pool, a saving of 97.996% of the encoding time is obtained as compared to the conventional fractal coding method for modeling mammograms. The mammogram models thus developed are used for detecting microcalcifications, and a diagnostic efficiency of 85.7% is obtained for the 28 mammograms used.
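The shade / non-shade split on which the speed-up rests can be sketched as follows (the image, the threshold, and the 8x8 block size are assumptions for illustration, not values from the paper):

```python
import numpy as np

# Blocks whose dynamic range (max - min) falls below a threshold are
# "shade" blocks and are excluded from the fractal domain-search pool;
# that exclusion is where the encoding time is saved.
rng = np.random.default_rng(0)
img = rng.integers(90, 110, (64, 64)).astype(float)   # mostly flat background
img[20:36, 20:36] += 80.0                             # one high-contrast patch

B, thr = 8, 20.0
shade, nonshade = [], []
for i in range(0, img.shape[0], B):
    for j in range(0, img.shape[1], B):
        blk = img[i:i + B, j:j + B]
        (shade if blk.max() - blk.min() < thr else nonshade).append((i, j))

print(len(nonshade), "of", len(shade) + len(nonshade),
      "blocks go to the fractal encoder")
```

Only the blocks straddling the contrast edge survive the filter, so the matching search pool shrinks sharply on images that are mostly smooth, as mammograms typically are.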