896 results for computer forensics, digital evidence, computer profiling, time-lining, temporal inconsistency, computer forensic object model
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Generalizing the dynamic field theory of spatial cognition across real and developmental time scales
Abstract:
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
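The core of the DFT is an activation field over a spatial dimension whose dynamics combine a resting level, local excitation, and lateral inhibition. As a hedged illustration of this class of model (not the published implementation, and with every parameter value assumed), a one-dimensional Amari-type field can be simulated in a few lines:

```python
# A minimal sketch of a one-dimensional Amari-type dynamic neural field, the
# class of model the DFT builds on. All parameter values are illustrative
# assumptions, not those of the published model.
import numpy as np

def gauss(x, sigma, amp):
    return amp * np.exp(-x**2 / (2 * sigma**2))

n = 181
x = np.linspace(-90, 90, n)              # spatial dimension (deg)
dx = x[1] - x[0]
tau, h = 10.0, -5.0                      # time constant (ms) and resting level
# local excitation minus broader inhibition; narrowing the excitatory sigma
# mimics the developmental sharpening of interactions the abstract describes
w = gauss(x, 5.0, 2.0) - gauss(x, 12.0, 1.0)

u = np.full(n, h)                        # field activation
cue = gauss(x - 20.0, 4.0, 6.0)          # transient target at +20 deg

for t in range(2000):                    # Euler steps, dt = 1 ms
    f = 1.0 / (1.0 + np.exp(-4.0 * u))   # sigmoidal output nonlinearity
    interaction = np.convolve(f, w, mode="same") * dx
    stim = cue if t < 500 else 0.0       # cue removed after 500 ms; if coupling
    u += (-u + h + interaction + stim) / tau  # is strong, a peak self-sustains

print("peak location after the delay:", x[np.argmax(u)])
```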
Abstract:
Planetary waves are key to large-scale dynamical adjustment in the global ocean as they transfer energy from the east to the west side of oceanic basins; they connect the forcing in the ocean interior with the variability at its boundaries; and they change the local heat content, thus coupling oceanic, atmospheric, and biological processes. Planetary waves, mostly of the first baroclinic mode, are observed as distinctive patterns in global time series of sea surface height anomaly (SSHA) and heat storage. The goal of this study is to compare and validate large-scale SSHA signals from the coupled ocean-atmosphere general circulation Model for Interdisciplinary Research on Climate (MIROC) with TOPEX/POSEIDON satellite altimeter observations. The last decade of the model's time series is selected for comparison with the altimeter data. The wave patterns are separated from the meso- and large-scale SSHA signals by digital filters calibrated to select the same spectral bands in both model and altimeter data. The band-wise comparison allows for an assessment of the model's skill in simulating the dynamical components of the observed wave field. Comparisons regarding both the seasonal cycle and the Rossby wave field differ significantly among basins. When carried out within the same basin, differences can occur between equal latitudes in opposite hemispheres. Furthermore, at some latitudes the MIROC reproduces biannual, annual and semiannual planetary waves with phase speeds and average amplitudes similar to those observed by the altimeter, but with significant differences in phase. (C) 2008 Elsevier Ltd. All rights reserved.
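As a hedged illustration of the band-wise separation step (the study's calibrated filters are not reproduced here; the cutoffs, sampling interval, and data are assumptions), a zero-phase digital band-pass filter can isolate, for example, the annual band of an SSHA time series:

```python
# A minimal sketch of separating one spectral band of a sea-surface-height-
# anomaly (SSHA) series with a digital band-pass filter. Cutoffs, sampling
# interval, and the synthetic signal are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt

dt_days = 10.0                       # ~TOPEX/POSEIDON repeat cycle
fs = 1.0 / dt_days                   # samples per day
t = np.arange(0, 3650, dt_days)      # ten years of samples

# synthetic SSHA: annual + semiannual signals plus noise
ssha = (5.0 * np.sin(2 * np.pi * t / 365.25)
        + 2.0 * np.sin(2 * np.pi * t / 182.6)
        + np.random.default_rng(0).normal(0, 1, t.size))

# band-pass around the annual period (here 300-450 days, an assumed band)
low, high = 1.0 / 450.0, 1.0 / 300.0
b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
annual_band = filtfilt(b, a, ssha)   # zero-phase filtering preserves wave phase

print("variance captured in annual band:", np.var(annual_band) / np.var(ssha))
```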
Abstract:
Induction of apoptotic cell death in response to chemotherapy and other external stimuli has proved extremely difficult in melanoma, leading to tumor progression, metastasis formation and resistance to therapy. A promising approach for cancer chemotherapy is the inhibition of proteasomal activity, as the half-life of the majority of cellular proteins is under proteasomal control and inhibitors have been shown to induce cell death programs in a wide variety of tumor cell types. 4-Nerolidylcatechol (4-NC) is a potent antioxidant whose cytotoxic potential has already been demonstrated in melanoma tumor cell lines. Furthermore, 4-NC was able to induce the accumulation of ubiquitinated proteins, including classic targets of this process such as Mcl-1. As shown for other proteasomal inhibitors in melanoma, the cytotoxic action of 4-NC is time-dependent upon the pro-apoptotic protein Noxa, which is able to bind and neutralize Mcl-1. We demonstrate the role of 4-NC as a potent inducer of ROS and p53. The use of an artificial skin model containing melanoma also provided evidence that 4-NC prevented melanoma proliferation in a 3D model that more closely resembles normal human skin.
Abstract:
In this paper, we propose three novel mathematical models for the two-stage lot-sizing and scheduling problems present in many process industries. The problem shares a continuous or quasi-continuous production feature upstream and a discrete manufacturing feature downstream, which must be synchronized. Different time-based scale representations are discussed. The first formulation encompasses a discrete-time representation. The second one is a hybrid continuous-discrete model. The last formulation is based on a continuous-time model representation. Computational tests with a state-of-the-art MIP solver show that the discrete-time representation provides better feasible solutions in short running times. On the other hand, the hybrid model achieves better solutions for longer computational times and was able to prove optimality more often. The continuous-type model is the most flexible of the three for incorporating additional operational requirements, at the cost of having the worst computational performance. Journal of the Operational Research Society (2012) 63, 1613-1630. doi:10.1057/jors.2011.159 published online 7 March 2012
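As a hedged illustration of the discrete-time style of formulation (a single-stage, capacitated simplification with invented data, not the paper's two-stage models), a lot-sizing MIP can be sketched with PuLP:

```python
# A minimal sketch of a single-machine, discrete-time lot-sizing model in the
# spirit of the paper's first formulation, written with PuLP (ships with the
# CBC solver). Data, costs, and the single-stage simplification are assumptions
# for illustration; the paper's models are two-stage and far richer.
import pulp

periods = range(4)
demand = [30, 40, 0, 50]
cap, setup_cost, hold_cost = 60, 100.0, 1.0

m = pulp.LpProblem("lot_sizing", pulp.LpMinimize)
x = pulp.LpVariable.dicts("produce", periods, lowBound=0)   # lot size
s = pulp.LpVariable.dicts("stock", periods, lowBound=0)     # end inventory
y = pulp.LpVariable.dicts("setup", periods, cat="Binary")   # setup indicator

m += pulp.lpSum(setup_cost * y[t] + hold_cost * s[t] for t in periods)
for t in periods:
    prev = s[t - 1] if t > 0 else 0
    m += prev + x[t] == demand[t] + s[t]    # inventory balance
    m += x[t] <= cap * y[t]                 # production only if the setup is paid

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([(t, x[t].value(), y[t].value()) for t in periods])
```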
Abstract:
The assessment of the thermal process impact in terms of food safety and quality is of great importance for process evaluation and design. This can be accomplished from the analysis of the residence time and temperature distributions coupled with the kinetics of thermal change, or from the use of a proper time-temperature integrator (TTI) as an indicator of safety and quality. The objective of this work was to develop and test enzymic TTIs with rapid detection for the evaluation of continuous HTST pasteurization processes (70-85 degrees C, 10-60 s) of low-viscosity liquid foods, such as milk and juices. The enzymes peroxidase, lactoperoxidase and alkaline phosphatase in phosphate buffer were tested, and activity was determined with commercial reflectometric strips. Discontinuous thermal treatments at various time-temperature combinations were performed in order to fit a first-order kinetic model of a two-component system. The measured time-temperature history was considered instead of assuming isothermal conditions. Experiments with slow heating and cooling were used to validate the fitted model. Only the alkaline phosphatase TTI showed potential for use in the evaluation of pasteurization processes. The choice was based on the obtained z-values of the thermostable and thermolabile fractions, on the cost and on the validation tests. (C) 2012 Elsevier Ltd. All rights reserved.
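A minimal sketch of the kinetic approach follows: a two-component first-order model whose rate constants obey a z-value temperature dependence, integrated over the measured time-temperature history rather than an assumed isothermal hold. All numerical values are illustrative assumptions, not the fitted parameters of the study.

```python
# Two-component first-order TTI model evaluated over a (non-isothermal)
# time-temperature history. Reference rate constants, z-values, and the
# fraction split are assumed, not the fitted alkaline phosphatase values.
import numpy as np
from scipy.integrate import trapezoid

def k(T, k_ref, z, T_ref=75.0):
    """First-order rate constant (1/s) with z-value temperature dependence."""
    return k_ref * 10.0 ** ((T - T_ref) / z)

# assumed history: heat to 75 C in 10 s, hold 30 s, then cool at 5 C/s
t = np.linspace(0, 60, 601)                       # s
T = np.clip(20 + 55 * np.minimum(t / 10, 1), 20, 75)
T[t > 40] = np.maximum(75 - 5 * (t[t > 40] - 40), 20)

a, kL, zL = 0.7, 0.10, 8.0     # thermolabile fraction: larger k, smaller z
kS, zS = 0.005, 15.0           # thermostable fraction

# integrate k(T(t)) dt for each fraction instead of assuming isothermal hold
int_L = trapezoid(k(T, kL, zL), t)
int_S = trapezoid(k(T, kS, zS), t)
residual = a * np.exp(-int_L) + (1 - a) * np.exp(-int_S)
print(f"residual activity fraction: {residual:.3f}")
```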
Abstract:
The presented study carried out an analysis of rural landscape changes. In particular, the study focuses on understanding the driving forces acting on the rural built environment using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A bibliographic review reveals a general lack of studies dealing with the modelling of the rural built environment, hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has led to the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two types of transformation dynamics mainly affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. The specific aim of the presented study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps covering different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology concerning the collection of data, the most suitable algorithm to be adopted in relation to the statistical theory and method used, and the calibration and evaluation of the model. A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents that is not suitable for building allocation. The presence or absence of buildings can therefore be adopted as an indicator of such driving conditions, since it represents the expression of the action of driving forces in the land-suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, makes it possible to associate the concepts of presence and absence with point features, generating a point process. The presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist and so are generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory-variable analysis and for the identification of the key driving variables behind the site selection process for new building allocation.
The model developed by following the methodology is applied to a case study to test the validity of the methodology. The study area is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model is carried out on spatial data for the periurban and rural parts of the study area within the 1975-2005 time period by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values, ranging from 0 to 1, of building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalised linear model using logistic regression (see the sketch after this paragraph). By comparing the probability map obtained from the model to the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends which occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
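As a hedged illustration of the calibration step (synthetic points and invented covariate names, not the Imola data), the presence/absence logistic GLM can be sketched as follows:

```python
# A minimal sketch of the presence/absence logistic GLM at the core of the
# proposed methodology: building locations (presences) and stochastically
# generated pseudo-absences are regressed on candidate driving-force variables.
# The variable names and synthetic data are assumptions for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
# candidate explanatory variables sampled at point locations
slope = rng.uniform(0, 30, n)            # geomorphologic (deg)
dist_road = rng.uniform(0, 5000, n)      # infrastructural (m)
dist_town = rng.uniform(0, 10000, n)     # socio-economic proxy (m)

# synthetic response: presence more likely on gentle, well-connected sites
logit = 1.5 - 0.08 * slope - 0.0008 * dist_road - 0.0002 * dist_town
presence = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([slope, dist_road, dist_town]))
model = sm.GLM(presence, X, family=sm.families.Binomial()).fit()
print(model.summary())                   # coefficient signs point to driving forces

# the fitted model maps any cell's covariates to a 0-1 occurrence probability
prob = model.predict(X)
```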
Abstract:
This thesis proposes a method to detect a significant cylinder imbalance by means of flywheel speed-fluctuation analysis in a turbocharged, spark-ignition internal combustion engine. The project is based on the experimental observation that, whenever a sharp air-fuel-ratio imbalance appears in one of the four cylinders, indices based on the differences between flywheel tooth times increase markedly. This makes it possible to detect mixture imbalances through an intrusive diagnosis designed to amplify them. Unlike methods based on the lambda-sensor signal, this approach does not suffer from the mixing of exhaust-gas packets inside the turbine. The thesis work consisted of conceiving a detection index capable of highlighting the phenomenon described above, and of building a Matlab-Simulink model that simulates the strategy and enables the realization of a prototype, by means of which the strategy was validated on board the vehicle.
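As a hedged illustration of the kind of tooth-time index involved (the wheel geometry, segment assignment, and index definition are assumptions, not the thesis's validated on-board strategy), per-cylinder segment durations can be compared like this:

```python
# Tooth-time-based imbalance index: per-cylinder segment durations are rebuilt
# from flywheel tooth times and compared across one engine cycle. Wheel
# geometry and the index definition are assumed for illustration.
import numpy as np

TEETH = 58                        # 60-2 trigger wheel (assumed)
CYLINDERS = 4
SEG = (2 * TEETH) // CYLINDERS    # teeth per combustion segment over 720 deg

def imbalance_index(tooth_times):
    """tooth_times: 1-D array of tooth-passage durations over one 720-deg cycle."""
    segments = tooth_times[: CYLINDERS * SEG].reshape(CYLINDERS, SEG)
    seg_dur = segments.sum(axis=1)              # time each cylinder's segment takes
    return (seg_dur - seg_dur.mean()) / seg_dur.mean()  # large value -> weak cylinder

# synthetic cycle: cylinder 3 is lean, so its segment is slower (assumed +2%)
rng = np.random.default_rng(0)
base = np.full(CYLINDERS * SEG, 1.0e-3) + rng.normal(0, 1e-6, CYLINDERS * SEG)
base[2 * SEG: 3 * SEG] *= 1.02
print(imbalance_index(base))      # the third entry stands out
```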
Abstract:
In a world that increasingly demands the automation of industrial production-chain activities, computer vision is a fundamental tool for what is already internationally recognized as the Fourth Industrial Revolution, or Industry 4.0. Using this tool, I undertook, at the company Syngenta, a study of the problem of automatically counting the number of leaves of a plant. The problem was tackled using two different approaches inspired by the literature. The thesis also includes the design description of a further method, not yet present in the literature. The methodologies are explained in detail, and the results obtained with the first two approaches are compared. The final chapter draws conclusions based on the results obtained and their analysis.
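As a hedged illustration of one colour-segmentation-plus-counting approach of the kind found in the literature (the thresholds, file name, and morphology settings are assumptions; the thesis's actual methods are not reproduced here):

```python
# Segment the plant by colour, then count connected components as a crude
# leaf-count estimate. Thresholds and morphology settings are illustrative
# assumptions; "plant.png" is a hypothetical input image.
import cv2
import numpy as np

img = cv2.imread("plant.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))   # assumed green range

# clean the mask, then separate touching leaves with a distance transform
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
_, peaks = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)

n_labels, _ = cv2.connectedComponents(peaks.astype(np.uint8))
print("estimated leaf count:", n_labels - 1)            # label 0 is background
```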
Abstract:
To clarify the circumstances of death, the degree of inebriation is of importance in many cases, but for several reasons the determination of the ethanol concentration in post-mortem samples can be challenging, and the synopsis of ethanol and the direct consumption markers ethyl glucuronide (EtG) and ethyl sulphate (EtS) has proved to be useful. The use of a rather stable matrix like vitreous humor offers further advantages. The aim of this study was to determine the concentrations of ethanol and the biomarkers in the robust matrix of vitreous humor and to compare them with the respective levels in peripheral venous blood and urine. Samples of urine, blood from the femoral vein and vitreous humor were taken from 26 deceased individuals with suspected ethanol consumption prior to death and analyzed for ethanol, EtS and EtG. In the urine samples, creatinine was also determined. The personal data, the circumstances of death, the post-mortem interval and the information about ethanol consumption prior to death were recorded. EtG and EtS analysis in urine was performed by LC-ESI-MS/MS, creatinine concentration was determined using the Jaffé reaction, and ethanol was detected by HS-GC-FID and by an ADH-based method. In general, the highest concentrations of the analytes were found in urine and showed statistical significance. The mean concentrations of EtG were 62.8 mg/L (EtG100 206.5 mg/L) in urine, 4.3 mg/L in blood and 2.1 mg/L in vitreous humor. EtS was found in the following mean concentrations: 54.6 mg/L in urine (EtS100 123.1 mg/L), 1.8 mg/L in blood and 0.9 mg/L in vitreous humor. Ethanol was detected in more vitreous humor samples (mean concentration 2.0 g/kg) than in blood and urine (mean concentrations 1.6 g/kg and 2.1 g/kg, respectively). There was no correlation between the ethanol and the marker concentrations, and no statistical conclusions could be drawn between the markers and matrices.
Abstract:
As lightweight and slender structural elements are more frequently used in design, large-scale structures become more flexible and susceptible to excessive vibrations. To ensure the functionality of the structure, the dynamic properties of the occupied structure need to be estimated during the design phase. Traditional analysis methods model occupants simply as additional mass; however, research has shown that human occupants are better modeled as an additional degree of freedom. In the United Kingdom, active and passive crowd models have been proposed by the Joint Working Group (JWG) as the result of a series of analytical and experimental research. It is expected that the crowd models yield a more accurate estimate of the dynamic response of the occupied structure. However, experimental testing recently conducted through a graduate student project at Bucknell University indicated that the proposed passive crowd model might be inaccurate in representing the impact of the occupants on the structure. The objective of this study is to assess the validity of the crowd models proposed by the JWG by comparing the dynamic properties obtained from experimental testing data with analytical modeling results. The experimental data used in this study were collected by Firman in 2010. The analytical results were obtained by performing a time-history analysis on a finite element model of the occupied structure. The crowd models were created based on the recommendations of the JWG combined with the physical properties of the occupants during the experimental study. During this study, SAP2000 was used to create the finite element models and to run the analysis; Matlab and ME'scope were used to obtain the dynamic properties of the structure by processing the time-history analysis results from SAP2000. The results of this study indicate that the active crowd model can quite accurately represent the impact on the structure of occupants standing with bent knees, while the passive crowd model could not properly simulate the dynamic response of the structure when occupants were standing straight or sitting on the structure. Future work related to this study involves improving the passive crowd model and evaluating the crowd models with full-scale structure models and operating data.
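As a hedged illustration of the underlying idea (a passive occupant attached to one structural mode as an extra mass-spring-damper; all numbers are invented, not JWG values), the coupled frequencies and damping can be read from state-space eigenvalues:

```python
# Structure mode + passive occupant as a coupled 2-DOF system; natural
# frequencies and damping ratios come from the state-matrix eigenvalues.
# All parameter values are illustrative assumptions.
import numpy as np

# structure mode (empty): m1, k1, c1; seated occupant as added DOF: m2, k2, c2
m1, f1, z1 = 10000.0, 6.0, 0.01
k1 = m1 * (2 * np.pi * f1) ** 2
c1 = 2 * z1 * np.sqrt(k1 * m1)
m2, f2, z2 = 800.0, 5.0, 0.40          # crowd: low frequency, high damping
k2 = m2 * (2 * np.pi * f2) ** 2
c2 = 2 * z2 * np.sqrt(k2 * m2)

M = np.array([[m1, 0.0], [0.0, m2]])
C = np.array([[c1 + c2, -c2], [-c2, c2]])
K = np.array([[k1 + k2, -k2], [-k2, k2]])

# first-order form: x_dot = A x, with x = [u, u_dot]
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam = np.linalg.eigvals(A)
freqs = np.abs(lam.imag) / (2 * np.pi)   # damped natural frequencies (Hz)
damping = -lam.real / np.abs(lam)        # modal damping ratios
print(sorted(set(np.round(freqs, 2))), np.round(damping, 3))
```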
Abstract:
Brain functions, such as learning, orchestrating locomotion, memory recall, and processing information, all require glucose as a source of energy. During these functions, the glucose concentration decreases as the glucose is being consumed by brain cells. By measuring this drop in concentration, it is possible to determine which parts of the brain are used during specific functions and consequently, how much energy the brain requires to complete the function. One way to measure in vivo brain glucose levels is with a microdialysis probe. The drawback of this analytical procedure, as with many steady-state fluid flow systems, is that the probe fluid will not reach equilibrium with the brain fluid. Therefore, brain concentration is inferred by taking samples at multiple inlet glucose concentrations and finding a point of convergence. The goal of this thesis is to create a three-dimensional, time-dependent, finite element representation of the brain-probe system in COMSOL 4.2 that describes the diffusion and convection of glucose. Once validated with experimental results, this model can then be used to test parameters that experiments cannot access. When simulations were run using published values for physical constants (i.e. diffusivities, density and viscosity), the resulting glucose model concentrations were within the error of the experimental data. This verifies that the model is an accurate representation of the physical system. In addition to accurately describing the experimental brain-probe system, the model I created is able to show the validity of zero-net-flux for a given experiment. A useful discovery is that the slope of the zero-net-flux line is dependent on perfusate flow rate and diffusion coefficients, but it is independent of brain glucose concentrations. The model was simplified with the realization that the perfusate is at thermal equilibrium with the brain throughout the active region of the probe. This allowed for the assumption that all model parameters are temperature independent. The time to steady-state for the probe is approximately one minute. However, the signal degrades in the exit tubing due to Taylor dispersion, on the order of two minutes for two meters of tubing. Given an analytical instrument requiring a five μL aliquot, the smallest brain process measurable for this system is 13 minutes.
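As a hedged illustration of the zero-net-flux calculation referred to above (the sample values are invented): the dialysate gain or loss is regressed against the inlet concentration, and the zero crossing estimates the brain extracellular level.

```python
# Zero-net-flux estimate: regress (C_out - C_in) on the perfusate inlet
# concentration C_in; the zero crossing approximates the brain extracellular
# glucose concentration. All sample values are illustrative assumptions.
import numpy as np

c_in = np.array([0.0, 0.5, 1.0, 2.0, 4.0])         # mM, perfusate glucose
c_out = np.array([0.45, 0.78, 1.05, 1.62, 2.80])   # mM, measured dialysate

gain = c_out - c_in                                # positive = glucose recovered
slope, intercept = np.polyfit(c_in, gain, 1)
c_brain = -intercept / slope                       # zero-net-flux point
recovery = -slope                                  # probe relative recovery

print(f"estimated brain glucose: {c_brain:.2f} mM, recovery: {recovery:.2f}")
```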
Abstract:
The use of dental processing software for computed tomography (CT) data (Dentascan) is described on postmortem (pm) CT data for the purpose of pm identification. The software allows reconstructing reformatted images comparable to conventional panoramic dental radiographs by defining a curved reconstruction line along the teeth on oblique images. Three corpses that had been scanned within the Virtopsy project were used to test the software for the purpose of dental identification. In every case, dental panoramic images could be reconstructed and compared to antemortem radiographs. The images showed the basic components of teeth (enamel, dentin, and pulp), the anatomic structure of the alveolar bone, missing or unerupted teeth, as well as restorations of the teeth that could be used for identification. When streak artifacts due to metal-containing dental work reduced image quality, it was still necessary to perform pm conventional radiographs for comparison of the detailed shape of the restoration. Dental identification or dental profiling seems to become possible in a noninvasive manner using the Dentascan software.
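As a hedged illustration of the reconstruction principle (a curved planar reformation along the dental arch; the synthetic data and curve are assumptions, not Dentascan's implementation):

```python
# Curved planar reformation: sample a CT slice along a curve drawn through the
# dental arch to "unroll" it into a panoramic strip. The synthetic image, the
# parabolic curve, and all sizes are assumptions for illustration.
import numpy as np
from scipy.ndimage import map_coordinates

slice_img = np.random.rand(256, 256)        # stand-in for an oblique CT slice

# parabolic curve approximating a dental arch on the slice (assumed shape)
s = np.linspace(-1.0, 1.0, 400)
cx = 128 + 90 * s                           # column along the arch
cy = 200 - 120 * (1 - s**2)                 # row along the arch

# unit normals to the curve, to sample a band of given thickness around it
dx, dy = np.gradient(cx), np.gradient(cy)
norm = np.hypot(dx, dy)
nx, ny = -dy / norm, dx / norm

depth = np.arange(-10, 11)                  # 21 samples across the band
rows = cy[None, :] + depth[:, None] * ny[None, :]
cols = cx[None, :] + depth[:, None] * nx[None, :]
panoramic = map_coordinates(slice_img, [rows, cols], order=1)
print(panoramic.shape)                      # (21, 400): the unrolled strip
```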
Abstract:
When reengineering legacy systems, it is crucial to assess if the legacy behavior has been preserved or how it changed due to the reengineering effort. Ideally, if a legacy system is covered by tests, running the tests on the new version can identify potential differences or discrepancies. However, writing tests for an unknown and large system is difficult due to the lack of internal knowledge. It is especially difficult to bring the system to an appropriate state. Our solution is based on the acknowledgment that one of the few trustworthy pieces of information available when approaching a legacy system is the running system itself. Our approach reifies the execution traces and uses logic programming to express tests on them. Thereby it eliminates the need to programmatically bring the system into a particular state, and hands the test writer a high-level abstraction mechanism to query the trace. The resulting system, called TESTLOG, was used on several real-world case studies to validate our claims.
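As a hedged illustration of the idea of expressing tests over a reified trace (TESTLOG itself uses logic programming; this Python rendering and its event model are assumptions, not its API):

```python
# Events recorded from the running legacy system are queried declaratively,
# so a test asserts over history instead of driving the system into a state.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str        # "call" or "return"
    receiver: str
    method: str
    args: tuple

trace = [
    Event("call", "Account", "deposit", (100,)),
    Event("return", "Account", "deposit", ()),
    Event("call", "Account", "withdraw", (40,)),
    Event("return", "Account", "withdraw", ()),
]

def calls(trace, method=None, receiver=None):
    """Query predicate: yield call events matching the given bindings."""
    for e in trace:
        if (e.kind == "call"
                and (method is None or e.method == method)
                and (receiver is None or e.receiver == receiver)):
            yield e

# a "test" expressed over the trace: every withdraw is preceded by a deposit
first_dep = next(i for i, e in enumerate(trace) if e.method == "deposit")
assert all(i > first_dep for i, e in enumerate(trace)
           if e.kind == "call" and e.method == "withdraw")
print(len(list(calls(trace, method="withdraw"))), "withdraw call(s) observed")
```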
Abstract:
EPON 862 is an epoxy resin which is cured with the hardening agent DETDA to form a crosslinked epoxy polymer and is used as a component in modern aircraft structures. These crosslinked polymers are often exposed to prolonged periods at temperatures below the glass-transition range, which causes physical aging to occur. Because physical aging can compromise the performance of epoxies and their composites, and because experimental techniques cannot provide all of the physical insight needed to fully understand physical aging, efficient computational approaches to predict the effects of physical aging on thermo-mechanical properties are needed. In this study, Molecular Dynamics and Molecular Minimization simulations are used to establish well-equilibrated, validated molecular models of the EPON 862-DETDA epoxy system with a range of crosslink densities using a united-atom force field. These simulations are subsequently used to predict the glass-transition temperature, thermal expansion coefficients, and elastic properties of each of the crosslinked systems for validation of the modeling techniques. The results indicate that the glass-transition temperature and elastic properties increase with increasing crosslink density, and the thermal expansion coefficient decreases with crosslink density, both above and below the glass-transition temperature. The results also indicate that there may be an upper limit to the crosslink density that can be realistically achieved in epoxy systems. After evaluation of the thermo-mechanical properties, a method is developed to efficiently establish molecular models of epoxy resins that represent the corresponding real molecular structure at specific aging times. Although this approach does not model the physical aging process itself, it is useful for establishing a molecular model that resembles the physically-aged state for further use in predicting thermo-mechanical properties as a function of aging time. An equation has been derived from the results that directly correlates aging time with the aged volume of the molecular model. This equation can be helpful for modelers who want to study the properties of epoxy resins at different levels of aging but have little information about the volume shrinkage occurring during physical aging.
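As a hedged illustration of how such an aging-time-to-volume relation could be used (the log-linear form and all data points are assumptions, not the derived equation):

```python
# Fit specific volume against the logarithm of aging time, then map a target
# aging time to the volume a molecular model should be equilibrated to.
# Data points and the log-linear form are illustrative assumptions.
import numpy as np

age_hours = np.array([1.0, 10.0, 100.0, 1000.0])       # aging times
volume = np.array([0.9800, 0.9770, 0.9743, 0.9715])    # cm^3/g, assumed

rate, v0 = np.polyfit(np.log10(age_hours), volume, 1)
print(f"v(t) ~ {v0:.4f} + ({rate:.2e}) * log10(t_hours)")

def aged_volume(t_hours):
    """Target specific volume to impose on the molecular model at time t."""
    return v0 + rate * np.log10(t_hours)

print(f"volume target for 1 year of aging: {aged_volume(24 * 365):.4f} cm^3/g")
```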