991 results for spatial error
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This motivates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code; such tools can help meet all of these goals. Static analysis of this kind can significantly improve software quality, but it remains a challenging field.

This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making the early detection of software bugs that are otherwise hard to detect more effective through static analysis of machine code. The focus of this work is to develop methods that automatically localize faults and optimize the code, thereby improving both the debugging process and the quality of the code.

Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated-peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application program.
An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed that assists the compiler in eliminating redundant bank-switching code and in deciding on an optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active-memory-bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified.

This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation and contributes to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
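As a rough illustration of the redundancy check described above, the sketch below tracks the active register-bank state along a single execution path and flags bank-select instructions that do not change that state. The mnemonics (BSF/BCF acting on the STATUS bits RP0/RP1, as on the PIC16F87X) are used for concreteness only; the extraction of paths from the control flow graph and the full rules of inference are assumed to happen elsewhere.

```python
# Minimal sketch: flag redundant bank-select instructions on one execution path.
# Assumes PIC16F87X-style bank selection via the RP0/RP1 bits of STATUS;
# path extraction from the control flow graph is assumed to be done elsewhere.

def find_redundant_bank_switches(path):
    """path: list of (mnemonic, operand) tuples in execution order."""
    bank_bits = {"RP0": None, "RP1": None}    # None = unknown at path entry
    redundant = []
    for idx, (mnemonic, operand) in enumerate(path):
        if operand not in bank_bits:
            continue                           # not a bank-select instruction
        new_value = 1 if mnemonic == "BSF" else 0
        if bank_bits[operand] == new_value:
            redundant.append(idx)              # active bank state unchanged -> redundant
        bank_bits[operand] = new_value         # record the state transition for this bit
    return redundant

# Example path: the second BSF on RP0 does not change the active bank.
example = [("BSF", "RP0"), ("MOVWF", "TRISB"), ("BSF", "RP0"), ("BCF", "RP0")]
print(find_redundant_bank_switches(example))   # -> [2]
```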
Abstract:
This thesis is entitled "Spatial and temporal variation of microbial community structure in surficial sediments of Cochin estuary". In estuarine and coastal systems, organic matter (OM) is derived not only from autochthonous primary production but also from allochthonous (terrestrial) OM delivered by river discharge and runoff. A significant portion of the OM sinks through the water column and is ultimately stored in the carbon pool of the sediments. An analysis of spatial and temporal variation in the benthic microbial community of a tropical estuary was conducted for the first time using non-selective measures, affirming that the PLFA approach is a sensitive and reliable method for determining the microbial community structure of surficial estuarine sediments. The close relationship between the concentrations of microbial fatty acids and total biomass indicates that bacteria could account for the largest proportion of the biomass in the sediments. This is the first study to document changes in microbial community composition and their linkage to biotic and abiotic variables in a benthic estuarine ecosystem. This contemporaneous community will be the backdrop for understanding the response of the autochthonous community to increasing anthropogenic stress.
Abstract:
The influence of salinity on phytoplankton varies widely, because different species have different salinity preferences. Like other marine and aquatic species, many phytoplankton species tolerate only a certain range of salinity, beyond which growth is inhibited. Light is the most important factor influencing phytoplankton growth. In aquatic environments (lakes, the sea or estuaries), the light incident on the surface is reduced rapidly and exponentially with depth (Kirk, 1994). In estuaries, the major factor influencing light availability is the suspended particulate matter, which attenuates and scatters the light. The light changes with the time of day and the season, affecting the amount of light penetrating the water column. Similarly, a biological factor such as copepod grazing is a major influence on the standing crop of phytoplankton; copepods can actively graze up to 75% of the phytoplankton biomass in a tropical estuary (Tan et al., 2004). It is in this context that the present study investigates salinity and light (physical factors) and copepod grazing (a biological factor) as the factors controlling phytoplankton growth and distribution.
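The exponential decrease of light with depth mentioned above is conventionally written with a diffuse attenuation coefficient (the standard form discussed by Kirk, 1994); the symbols below are the usual ones and are not taken from this abstract.

```latex
% Downwelling irradiance at depth z (standard exponential attenuation form,
% cf. Kirk, 1994): I_0 is the surface irradiance, K_d the diffuse attenuation
% coefficient; the symbols are conventional, not taken from this abstract.
I(z) = I_0 \, e^{-K_d z}
```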
Abstract:
The present study is focused on the intensity distribution of rainfall in different classes and their contribution to the total seasonal rainfall. In addition, we studied the spatial and diurnal variation of the rainfall in the study areas. For the present study, we retrieved TRMM (Tropical Rainfall Measuring Mission) rain-rate data available at 3 h temporal and 25 km spatial resolution. Moreover, station rainfall data were used to validate the TRMM rain rate, and a significant correlation was found between them (linear correlation coefficients are 0.96, 0.85, 0.75 and 0.63 for the stations Kota Bharu, Senai, Cameron Highlands and KLIA, respectively). We selected four areas in Peninsular Malaysia: the south coastal, east coastal, west coastal and highland regions. The diurnal variation of the frequency of rain occurrence differs between locations. We noticed a bimodal variation in the coastal areas in most of the seasons and a unimodal variation in the highland/inland area. During the southwest monsoon period at the west coastal stations, there is no distinct diurnal variation. The distribution of the different intensity classes during the different seasons is explained in detail in the results.
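The diurnal-variation analysis described above amounts to binning the 3-hourly rain-rate series by local time of day and counting rain occurrences per bin. A minimal sketch of that bookkeeping is given below; the 0.1 mm/h occurrence threshold and the synthetic example series are assumptions, not values from the study.

```python
# Minimal sketch: diurnal frequency of rain occurrence from a 3-hourly rain-rate series.
# The 0.1 mm/h occurrence threshold is an assumption, not a value from the study.
from collections import Counter

def diurnal_rain_frequency(times_utc, rain_rates, utc_offset_hours=8, threshold=0.1):
    """times_utc: hours since start of record (multiples of 3); rain_rates: mm/h."""
    counts, totals = Counter(), Counter()
    for t, r in zip(times_utc, rain_rates):
        local_hour = int(t + utc_offset_hours) % 24    # Peninsular Malaysia is UTC+8
        bin_start = (local_hour // 3) * 3               # 3-hour bins: 0-3, 3-6, ...
        totals[bin_start] += 1
        if r >= threshold:
            counts[bin_start] += 1
    return {b: counts[b] / totals[b] for b in sorted(totals)}

# Synthetic example: rain mostly in the 15-18 local-time bin.
times = list(range(0, 72, 3))
rates = [2.0 if (int(t + 8) % 24) in (15, 16, 17) else 0.0 for t in times]
print(diurnal_rain_frequency(times, rates))
```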
Abstract:
In recent years, reversible logic has emerged as one of the most important approaches to power optimization, with applications in low-power CMOS, quantum computing and nanotechnology. This paper proposes low-power circuits, implemented using reversible logic, that provide single error correction and double error detection (SEC-DED). The design uses a new 4 x 4 reversible gate called 'HCG' for implementing the Hamming error coding and detection circuits. A parity-preserving HCG (PPHCG), which preserves the input parity at the output bits, is used to achieve fault tolerance for the Hamming error coding and detection circuits.
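The paper implements Hamming coding with a reversible 4 x 4 HCG gate; the underlying SEC-DED behaviour can be sketched irreversibly as an extended Hamming (8,4) code, shown below purely to illustrate the single-correction / double-detection logic. This is not the reversible-gate construction of the paper.

```python
# Minimal sketch of extended Hamming (8,4) SEC-DED encoding/decoding.
# Illustrates the single-error-correct / double-error-detect behaviour only;
# the reversible HCG/PPHCG gate construction of the paper is not reproduced here.

def encode(d):                       # d: list of 4 data bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                # covers code positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                # covers code positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                # covers code positions 4, 5, 6, 7
    code = [p1, p2, d1, p3, d2, d3, d4]
    p0 = 0
    for b in code:
        p0 ^= b                      # overall parity bit enables double-error detection
    return code + [p0]

def decode(r):                       # r: 8 received bits
    c, p0 = r[:7], r[7]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3                  # 1-based position of a single error
    overall = p0 ^ c[0] ^ c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6]
    if syndrome and overall:                          # single error inside the (7,4) part
        c[syndrome - 1] ^= 1
        status = "corrected"
    elif syndrome and not overall:                    # two errors: detected, not correctable
        status = "double error detected"
    elif not syndrome and overall:                    # error confined to the overall parity bit
        status = "error in overall parity bit"
    else:
        status = "ok"
    return [c[2], c[4], c[5], c[6]], status           # recovered data bits and status

word = encode([1, 0, 1, 1])
word[5] ^= 1                                          # inject a single-bit error
print(decode(word))                                   # -> ([1, 0, 1, 1], 'corrected')
```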
Abstract:
An overview of known spatial clustering algorithms. The space of interest can be the two-dimensional abstraction of the surface of the earth, a man-made space such as the layout of a VLSI design, a volume containing a model of the human brain, or another 3D space representing the arrangement of chains of protein molecules. The data consist of geometric information and can be either discrete or continuous. The explicit location and extension of spatial objects define implicit relations of spatial neighbourhood (such as topological, distance and direction relations), which are used by spatial data mining algorithms. Therefore, dedicated spatial data mining algorithms are required for spatial characterization and spatial trend analysis. Spatial data mining, or knowledge discovery in spatial databases, differs from regular data mining in ways analogous to the differences between non-spatial and spatial data. The attributes of a spatial object stored in a database may be affected by the attributes of that object's spatial neighbours. In addition, spatial location, and implicit information about the location of an object, may be exactly the information that can be extracted through spatial data mining.
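Since the neighbourhood relations above (distance in particular) are the primitive that density-based spatial clustering builds on, a minimal self-contained sketch of such a clustering pass over 2-D points follows; the eps/min_pts parameters and the plain O(n^2) region query are illustrative choices, not part of this overview.

```python
# Minimal density-based spatial clustering sketch over 2-D points
# (DBSCAN-style expansion using a Euclidean-distance neighbourhood).
from math import dist

def region_query(points, i, eps):
    return [j for j, q in enumerate(points) if dist(points[i], q) <= eps]

def cluster(points, eps=1.0, min_pts=3):
    labels = [None] * len(points)           # None = unvisited, -1 = noise
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbours = region_query(points, i, eps)
        if len(neighbours) < min_pts:
            labels[i] = -1                  # not dense enough: mark as noise for now
            continue
        labels[i] = cluster_id
        frontier = list(neighbours)
        while frontier:                     # grow the cluster through dense neighbours
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster_id      # formerly-noise point becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            more = region_query(points, j, eps)
            if len(more) >= min_pts:        # only core points propagate the expansion
                frontier.extend(more)
        cluster_id += 1
    return labels

pts = [(0, 0), (0.5, 0.2), (0.3, 0.7), (5, 5), (5.2, 4.9), (5.1, 5.3), (9, 0)]
print(cluster(pts))                         # -> two clusters plus one noise point
```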
Abstract:
The present study focuses on the spatiotemporal variation of the microbial population (bacteria, fungi and actinomycetes) in the grassland soils of a tropical montane forest and its relation to important soil physico-chemical characteristics and nutrients. Different physico-chemical properties of the soil, such as temperature, moisture content, organic carbon, available nitrogen, available phosphorus and available potassium, have been studied. The results revealed that both microbial load and soil characteristics show spatiotemporal variation. The microbial population of the grassland soils was characterized by a high load of bacteria, followed by fungi and actinomycetes. Microbial load was highest during the pre-monsoon season, followed by the post-monsoon and monsoon seasons. The microbial load varied with important soil physico-chemical properties and nutrients. Organic carbon content, available nitrogen and available phosphorus were positively correlated with bacterial load, the correlations being significant at the 0.05 and 0.01 levels respectively. Available nitrogen and available phosphorus were positively correlated with fungal load at the 0.05 level of significance. Moisture content was negatively correlated with actinomycetes at the 0.01 level of significance, and organic carbon was negatively correlated with actinomycete load at the 0.05 level of significance.
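The correlation and significance statements above correspond to ordinary Pearson correlations tested at the 0.05 and 0.01 levels; a minimal sketch of that computation, with made-up numbers rather than the study's data, is shown below.

```python
# Minimal sketch: Pearson correlation with a two-sided significance test, as used for
# the soil-property vs. microbial-load relationships (the numbers are made up).
from scipy.stats import pearsonr

organic_carbon = [1.2, 1.8, 2.1, 2.6, 3.0, 3.4, 3.9, 4.2]   # hypothetical values
bacterial_load = [3.1, 3.6, 4.0, 4.8, 5.1, 5.9, 6.2, 6.8]   # hypothetical values

r, p = pearsonr(organic_carbon, bacterial_load)
for alpha in (0.05, 0.01):
    verdict = "significant" if p < alpha else "not significant"
    print(f"r = {r:.3f}, p = {p:.4f}: {verdict} at the {alpha} level")
```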
Abstract:
Severe local storms, including tornadoes, damaging hail and wind gusts, frequently occur over the eastern and northeastern states of India during the pre-monsoon season (March-May). Forecasting thunderstorms is one of the most difficult tasks in weather prediction, due to their rather small spatial and temporal extent and the inherent non-linearity of their dynamics and physics. In this paper, sensitivity experiments are conducted with the WRF-NMM model to test the impact of convective parameterization schemes on simulating severe thunderstorms that occurred over Kolkata on 20 May 2006 and 21 May 2007, and the model results are validated against observations. In addition, a simulation without a convective parameterization scheme was performed for each case to determine whether the model could simulate the convection explicitly. A statistical analysis based on mean absolute error, root mean square error and correlation coefficient is performed to compare the simulated and observed data for the different convective schemes. This study shows that the prediction of thunderstorm-affected parameters is sensitive to the convective scheme. The Grell-Devenyi cloud ensemble convective scheme simulated the thunderstorm activity well in terms of time, intensity and region of occurrence of the events, compared to the other convective schemes and the explicit scheme.
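The verification statistics named here (mean absolute error, root mean square error and the correlation coefficient between simulated and observed values) reduce to the short computation sketched below; the arrays are placeholders, not the study's data.

```python
# Minimal sketch of the verification statistics used to compare schemes:
# mean absolute error, root mean square error and correlation coefficient.
import numpy as np

def verify(simulated, observed):
    sim, obs = np.asarray(simulated, float), np.asarray(observed, float)
    mae = np.mean(np.abs(sim - obs))
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    corr = np.corrcoef(sim, obs)[0, 1]
    return mae, rmse, corr

# Placeholder values (e.g. a simulated variable from one scheme vs. observations).
sim = [12.0, 30.5, 8.2, 0.0, 22.4]
obs = [10.0, 35.0, 6.5, 1.2, 20.0]
print("MAE = %.2f  RMSE = %.2f  r = %.2f" % verify(sim, obs))
```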
Abstract:
While channel coding is a standard method of improving a system’s energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands in network speeds are placing a large burden on the energy efficiency of high-speed links and render the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through concepts of error region, channel signature, and correlation distance. This framework provides a deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting
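As a toy illustration of error probabilities dominated by residual ISI plus Gaussian noise, the sketch below averages the Gaussian tail probability over all residual-ISI patterns of a short tap vector. The tap values, noise level and decision rule are invented for the example, and the paper's error-region / channel-signature framework is not reproduced here.

```python
# Toy sketch: error probability of a binary link with a few residual ISI taps and
# additive Gaussian noise, by exact averaging over ISI patterns (not Monte Carlo).
from itertools import product
from math import erfc, sqrt

def error_probability(main_cursor, residual_taps, sigma):
    """P(error) averaged over equiprobable +/-1 interfering symbols."""
    q = lambda x: 0.5 * erfc(x / sqrt(2.0))           # Gaussian tail function Q(x)
    total = 0.0
    patterns = list(product((-1.0, 1.0), repeat=len(residual_taps)))
    for symbols in patterns:
        isi = sum(s * t for s, t in zip(symbols, residual_taps))
        total += q((main_cursor + isi) / sigma)        # eye opening shifted by the ISI term
    return total / len(patterns)

# Invented example values: main cursor 1.0, three residual taps, noise sigma 0.2.
print(error_probability(1.0, [0.15, -0.10, 0.05], sigma=0.2))
```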
Abstract:
Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading due to multipath, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we compute FER and BER bounds of a coded OFDM system, expressed as convex functions, for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation that minimizes the FER or BER under a constant total transmission power constraint is obtained for a given coded OFDM system and channel response.
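To make the convex-optimization step concrete, the sketch below minimizes a generic convex BER-like surrogate (a sum of exponentials, one per subcarrier) under a total-power equality constraint; the surrogate, the gains and the use of SciPy's SLSQP solver are illustrative assumptions, not the bounds derived in the paper.

```python
# Illustrative sketch: allocate power across OFDM subcarriers by minimizing a convex
# surrogate sum_k exp(-g_k * p_k) subject to sum_k p_k = P_total and p_k >= 0.
# The surrogate and the gains are assumptions, not the paper's FER/BER bounds.
import numpy as np
from scipy.optimize import minimize

gains = np.array([2.0, 0.3, 1.2, 0.1, 1.8, 0.9])       # per-subcarrier effective gains
p_total = 6.0

objective = lambda p: np.sum(np.exp(-gains * p))
constraint = {"type": "eq", "fun": lambda p: np.sum(p) - p_total}
bounds = [(0.0, None)] * len(gains)
p0 = np.full(len(gains), p_total / len(gains))           # start from uniform allocation

result = minimize(objective, p0, method="SLSQP", bounds=bounds, constraints=[constraint])
print("power allocation:", np.round(result.x, 3))
print("surrogate value :", round(result.fun, 5))
```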
Abstract:
Heavy metals in the surface sediments of two coastal ecosystems of Cochin, southwest India, were assessed. The study intends to evaluate the degree of anthropogenic influence on heavy metal concentrations in the sediments of the mangrove and adjacent estuarine stations using the enrichment factor and the geoaccumulation index. The inverse relationship of Cd and Zn with texture in the mangrove sediments suggested anthropogenic enrichment of these metals in the mangrove systems. In the estuarine sediments, the absence of any significant correlation of the heavy metals with other sedimentary parameters, together with their strong interdependence, suggested that the input is not through natural weathering processes. The analysis of the enrichment factor indicated minor enrichment for Pb and Zn in the mangrove sediments, whereas extremely severe enrichment for Cd, moderate enrichment for Zn and minor enrichment for Pb were observed in the estuarine system. The geoaccumulation index exhibited very low values for all metals except Zn, indicating that the sediments of the mangrove ecosystem are unpolluted to moderately polluted by anthropogenic activities. However, a very strongly polluted condition for Cd and a moderately polluted condition for Zn were evident in the estuarine sediments.
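For reference, the enrichment factor and geoaccumulation index used above are conventionally defined as below; the choice of Al as the normalizing element and the factor 1.5 in Igeo are the usual conventions and are not stated in this abstract.

```latex
% Enrichment factor, normalized here to Al (the normalizer choice is an assumption):
EF = \frac{(C_M / C_{Al})_{\text{sample}}}{(C_M / C_{Al})_{\text{background}}}

% Geoaccumulation index (Mueller), with C_n the measured and B_n the background
% concentration of metal n; the factor 1.5 compensates for lithogenic variability:
I_{geo} = \log_2\!\left(\frac{C_n}{1.5\, B_n}\right)
```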
Abstract:
This paper presents the first detailed investigation of the residual levels of organochlorine insecticide (OCI) concentrations in Cochin estuarine sediment. It aims to elucidate their distribution and ecological impact on the aquatic system. Concentrations of persistent organochlorine compounds (OCs) were determined for 17 surface sediment samples collected from specific sites of the Cochin Estuarine System (CES) between November 2009 and November 2011. The contaminant levels in the CES were compared with those of other ecosystems worldwide. The sites bearing high concentrations of organochlorine compounds are associated with complex, low-energy environments. Evaluation of ecotoxicological factors suggests that adverse biological effects are to be expected in certain areas of the CES.
Abstract:
The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ε. Here µ(x) is the mean response at the predictor variable value X = x, and ε = Y − µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount X absorbed by the plant is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants Z can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural or medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ε, X = Z + η, where η and ε are random errors with E(ε) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random vector.
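A minimal simulation of the Berkson setup (with a linear mean response and Gaussian errors chosen purely for illustration) is sketched below: X is generated from the observed Z plus error, Y is generated from the unobserved X, and the parametric fit is carried out on the observable pair (Z, Y).

```python
# Minimal simulation sketch of the Berkson measurement error model
#   Y = mu(X) + eps,  X = Z + eta,
# with a linear mu and Gaussian errors (illustrative choices only).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
beta0, beta1 = 1.0, 2.0                        # parameters of mu(x) = beta0 + beta1 * x

Z = rng.uniform(0.0, 10.0, size=n)             # observed design variable
eta = rng.normal(0.0, 1.0, size=n)             # Berkson error, independent of Z
X = Z + eta                                    # true but unobserved predictor
eps = rng.normal(0.0, 0.5, size=n)
Y = beta0 + beta1 * X + eps                    # response generated from the unobserved X

# Fit the parametric model using only the observable pair (Z, Y).
design = np.column_stack([np.ones(n), Z])
beta_hat, *_ = np.linalg.lstsq(design, Y, rcond=None)
print("estimated (beta0, beta1):", np.round(beta_hat, 3))
```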
Abstract:
The aim of this paper is the investigation of the error which results from the method of approximate approximations applied to functions defined on compact intervals only. This method, which is based on an approximate partition of unity, was introduced by V. Mazya in 1991 and has mainly been used for functions defined on the whole space up to now. For the treatment of differential equations and boundary integral equations, however, an efficient approximation procedure on compact intervals is needed. In the present paper we apply the method of approximate approximations to functions which are defined on compact intervals. In contrast to the whole-space case, here a truncation error has to be controlled in addition. For the resulting total error, pointwise estimates and L1-estimates are given, where all the constants are determined explicitly.
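For orientation, the one-dimensional quasi-interpolant of the method with the standard Gaussian generating function is recalled below; the generating function and normalisation shown are the usual choice in Maz'ya's approach and are not necessarily the ones analysed in the paper. On a compact interval the sum runs only over nodes inside the interval, which is the source of the additional truncation error mentioned above.

```latex
% One-dimensional approximate approximation with the standard Gaussian generating
% function; D > 0 is the shape parameter, h the mesh size.  Restricting the sum to
% nodes mh in [a,b] introduces the truncation error discussed in the paper.
(\mathcal{M}_h u)(x) \;=\; (\pi \mathcal{D})^{-1/2}
  \sum_{m :\; mh \in [a,b]} u(mh)\,
  \exp\!\left(-\frac{(x - mh)^2}{\mathcal{D} h^2}\right)
```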
Abstract:
Digital stochastic magnetic-field sensor array. Stefan Rohrer. Within the scope of a multi-year research project funded by the German Research Foundation (DFG), digital magnetic-field sensors with widths down to 1 µm were developed at the Institute of Microelectronics (IPM) of the University of Kassel. This dissertation presents a magnetic-field sensor array that emerged from this research project and was designed specifically to detect digital magnetic fields quickly and on minimal area, with good spatial and temporal resolution. The test chip, still fabricated in a 1.0 µm CMOS process, operates at clock frequencies of up to 27 MHz with a sensor pitch of 6.75 µm. This makes it the smallest and fastest digital magnetic-field sensor array in a standard CMOS process to date. Converted to a 0.09 µm technology, frequencies of up to 1 GHz can be reached with a sensor pitch below 1 µm. The dissertation describes the most important results of the project in detail. The sensor is based on a feedback-coupled inverter arrangement. A Hall-effect-based double-drain MAGFET serves as the magnetic-field-sensitive element and influences the behaviour of the latch. The strength and polarity of the magnetic field can be determined from the digital output data. The overall arrangement forms a stochastic magnetic-field sensor. The thesis presents a model for the flipping behaviour of the feedback-coupled inverters. The noise influences on the sensor are analysed and modelled in a system of stochastic differential equations. The solution of the stochastic differential equation shows how the probability distribution of the output signal evolves over time and which factors influence the error probability of the sensor. It indicates which parameters in the design and layout of a stochastic sensor lead to an optimal result. The circuits and layout components of a digital stochastic sensor based on these theoretical calculations are presented. Owing to the process tolerances inherent in the technology, each detector requires its own compensating calibration; different implementations of this calibration are presented and evaluated. For more accurate modelling, a SPICE model is set up, and from it a stochastic differential equation with SPICE-determined coefficients is derived for the flipping behaviour of the sensor. Compared to standard magnetic-field sensors, the stochastic digital readout offers the advantage of flexible measurement: one can choose between fast measurements at reduced accuracy and high local resolution, or high accuracy when evaluating slowly varying magnetic fields below 1 mT. The thesis presents the measurement results of the test chip. The measured sensitivity and error probability as well as the optimal operating points and the characteristic curves are shown. The relative sensitivity of the MAGFETs is 0.0075/T; the achievable error probabilities are listed in the thesis. The measured flipping behaviour of the stochastic sensors agrees well with the theoretical model.
Various measurements of analogue and digital magnetic fields confirm the applicability of the sensor for fast magnetic-field measurements up to 27 MHz, even for small magnetic fields below 1 mT. Measurements of the sensor characteristics as a function of temperature show that the sensitivity increases markedly at very low temperatures owing to the decrease in noise. A summary and an extensive bibliography give an overview of the state of the art.
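As a purely hypothetical illustration of how a stochastic differential equation can describe the latch behaviour, the sketch below runs an Euler-Maruyama simulation of a toy bistable system whose drift is tilted by the magnetic field and estimates the probability of settling into the '1' state. The model, its coefficients and the field coupling are invented for the example; they are not the thesis's SPICE-derived equation.

```python
# Toy Euler-Maruyama sketch: a bistable latch whose settling probability is tilted
# by the magnetic field B.  The model and coefficients are invented for illustration;
# they are not the SPICE-derived stochastic differential equation of the thesis.
import numpy as np

def settle_probability(b_field, coupling=0.5, sigma=0.8, dt=1e-3, steps=2000, trials=5000):
    rng = np.random.default_rng(1)
    x = np.zeros(trials)                               # latch starts at the unstable midpoint
    for _ in range(steps):
        drift = x - x**3 + coupling * b_field          # double-well drift tilted by the field
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(trials)
    return np.mean(x > 0.0)                            # fraction of trials deciding '1'

for b in (-1.0, -0.2, 0.0, 0.2, 1.0):
    print(f"B = {b:+.1f}  ->  P(output = 1) = {settle_probability(b):.3f}")
```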