Abstract:
Coinduction is a proof rule, the dual of induction. It allows reasoning about non-well-founded structures such as lazy lists or streams, and is of particular use for reasoning about equivalences. A central difficulty in the automation of coinductive proof is the choice of a relation (called a bisimulation). We present an automation of coinductive theorem proving based on the idea of proof planning. Proof planning constructs the higher-level steps in a proof, using knowledge of the general structure of a family of proofs and exploiting this knowledge to control the proof search. Part of proof planning involves the use of failure information to modify the plan, by means of a proof critic that exploits the information gained from the failed proof attempt. Our approach was to develop a strategy that makes an initial simple guess at a bisimulation and then uses generalisation techniques, motivated by a critic, to refine this guess, so that a larger class of coinductive problems can be verified automatically. The implementation of this strategy has focused on the use of coinduction to prove the equivalence of programs in a small lazy functional language similar to Haskell. We have developed a proof plan for coinduction and an associated critic. These have been implemented in CoClam, an extended version of Clam, with encouraging results: the planner has been successfully tested on a number of theorems.
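The kind of program equivalence that coinduction establishes can be illustrated with streams. The sketch below uses Python generators standing in for the abstract's lazy functional language (all names are illustrative, not taken from CoClam): two syntactically different definitions of the infinite stream of ones are compared on a finite prefix, whereas a coinductive proof via a bisimulation would cover all elements at once.

```python
from itertools import islice

def ones():
    # ones = 1 : ones  -- a corecursive stream definition
    while True:
        yield 1

def ones_via_map():
    # map (+0) ones  -- a syntactically different definition of the same stream
    for x in ones():
        yield x + 0

def prefix_equal(s1, s2, n):
    """Bounded approximation of stream equality: compare the first n
    elements. A coinductive proof would instead exhibit a bisimulation
    relating the two streams, establishing equality of ALL elements."""
    return list(islice(s1, n)) == list(islice(s2, n))

print(prefix_equal(ones(), ones_via_map(), 100))  # True
```

No finite check of this kind constitutes a proof; the point of the coinductive proof rule is precisely to discharge the infinite obligation by a finite bisimulation argument.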
Abstract:
The second generation of large-scale interferometric gravitational wave (GW) detectors will be limited by quantum noise over a wide frequency range in their detection band. Further sensitivity improvements for future upgrades or new detectors beyond the second generation motivate the development of measurement schemes to mitigate the impact of quantum noise in these instruments. Two strands of development are being pursued to reach this goal, focusing on both modifications of the well-established Michelson detector configuration and the development of different detector topologies. In this paper, we present the design of the world's first Sagnac speed meter (SSM) interferometer, which is currently being constructed at the University of Glasgow. With this proof-of-principle experiment we aim to demonstrate the theoretically predicted lower quantum noise of a Sagnac interferometer compared to an equivalent Michelson interferometer, in order to qualify the SSM for further research towards an implementation in a future-generation large-scale GW detector, such as the planned Einstein Telescope observatory.
Abstract:
Presentation from the MARAC conference in Pittsburgh, PA on April 14–16, 2016. S15 - The Duchamp Research Portal: Moving an Idea to Proof of Concept.
Abstract:
OBJECTIVES: Due to the high prevalence of renal failure in transcatheter aortic valve replacement (TAVR) candidates, a non-contrast MR technique is desirable for pre-procedural planning. We sought to evaluate the feasibility of a novel, non-contrast, free-breathing, self-navigated three-dimensional (SN3D) MR sequence for imaging the aorta from its root to the iliofemoral run-off, in comparison to non-contrast two-dimensional balanced steady-state free-precession (2D-bSSFP) imaging. METHODS: SN3D [field of view (FOV), 220-370 mm³; slice thickness, 1.15 mm; repetition/echo time (TR/TE), 3.1/1.5 ms; flip angle, 115°] and 2D-bSSFP acquisitions (FOV, 340 mm; slice thickness, 6 mm; TR/TE, 2.3/1.1 ms; flip angle, 77°) were performed in 10 healthy subjects (all male; mean age, 30.3 ± 4.3 yrs) using a 1.5-T MRI system. Aortic root measurements and qualitative image ratings (four-point Likert scale) were compared. RESULTS: The mean effective aortic annulus diameter was similar for 2D-bSSFP and SN3D (26.7 ± 0.7 vs. 26.1 ± 0.9 mm, p = 0.23). The mean image quality of 2D-bSSFP (4; IQR 3-4) was rated slightly higher (p = 0.03) than that of SN3D (3; IQR 2-4). The mean total acquisition time for SN3D imaging was 12.8 ± 2.4 min. CONCLUSIONS: Our results suggest that the novel SN3D sequence allows rapid, free-breathing assessment of the aortic root and the aortoiliofemoral system without administration of contrast medium. KEY POINTS: • The prevalence of renal failure is high among TAVR candidates. • Non-contrast 3D MR angiography allows for TAVR procedure planning. • The self-navigated sequence provides a significantly reduced scanning time.
Abstract:
Standards of proof in law serve the purpose of instructing juries as to the expected levels of confidence in determinations of fact. In criminal trials, to reach a guilty verdict a jury must be satisfied beyond a reasonable doubt, and in civil trials by a preponderance of the evidence. The purposes of this study are to determine the quantitative thresholds used to make these determinations; to ascertain the levels of juror agreement with basic principles of justice; and to try to predict thresholds and beliefs from juror personality characteristics. Participants read brief case descriptions, indicated thresholds in percentages and their beliefs in various principles, and completed three personality measures. Thresholds of 92-94% in criminal and 80% in civil matters were found, but prediction by personality was not supported. Significant percentages of jurors disavowed the presumptions of innocence and right to counsel.
Abstract:
Dust attenuation affects nearly all observational aspects of galaxy evolution, yet very little is known about the form of the dust-attenuation law in the distant universe. Here, we model the spectral energy distributions of galaxies at z ~ 1.5–3 from CANDELS with rest-frame UV to near-IR imaging under different assumptions about the dust law, and compare the amount of inferred attenuated light with the observed infrared (IR) luminosities. Some individual galaxies show strong Bayesian evidence in preference of one dust law over another, and this preference agrees with their observed location on the plane of infrared excess (IRX, L_TIR/L_UV) and UV slope (β). We generalize the shape of the dust law with an empirical model, A_(λ,δ) = E(B-V) k_λ (λ/λ_V)^δ, where k_λ is the dust law of Calzetti et al., and show that there exists a correlation between the color excess E(B-V) and the tilt δ, with δ = (0.62±0.05)log(E(B-V)) + (0.26±0.02). Galaxies with high color excess have a shallower, starburst-like law, and those with low color excess have a steeper, SMC-like law. Surprisingly, the galaxies in our sample show no correlation between the shape of the dust law and stellar mass, star formation rate, or β. The change in the dust law with color excess is consistent with a model where attenuation is caused by scattering, a mixed star–dust geometry, and/or trends with stellar population age, metallicity, and dust grain size. This rest-frame UV-to-near-IR method shows potential to constrain the dust law at even higher redshifts (z > 3).
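The generalized attenuation law can be written out concretely. The sketch below assumes the standard Calzetti et al. (2000) form for k_λ, a reference wavelength λ_V = 0.55 μm, and that the logarithm in the tilt-color-excess relation is base 10; these are illustrative assumptions, not details stated in the abstract.

```python
import math

LAMBDA_V = 0.55  # micron; assumed V-band reference wavelength

def k_calzetti(lam):
    """Calzetti et al. (2000) starburst attenuation curve; lam in micron."""
    if 0.63 <= lam <= 2.20:
        return 2.659 * (-1.857 + 1.040 / lam) + 4.05
    if 0.12 <= lam < 0.63:
        return 2.659 * (-2.156 + 1.509 / lam
                        - 0.198 / lam**2 + 0.011 / lam**3) + 4.05
    raise ValueError("wavelength outside 0.12-2.20 micron")

def delta_from_ebv(ebv):
    """Tilt-color-excess relation quoted in the abstract (log assumed base 10)."""
    return 0.62 * math.log10(ebv) + 0.26

def attenuation(lam, ebv, delta=None):
    """A_(lam, delta) = E(B-V) * k_lam * (lam / LAMBDA_V)**delta."""
    if delta is None:
        delta = delta_from_ebv(ebv)
    return ebv * k_calzetti(lam) * (lam / LAMBDA_V) ** delta
```

At λ = λ_V the power-law tilt factor is 1, so A_V reduces to E(B-V)·k_V regardless of δ; a negative δ (low color excess) steepens the curve toward an SMC-like law, a positive δ flattens it, matching the trend described above.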
Abstract:
Selling devices in retail stores comes with the big challenge of grabbing the customer's attention. Nowadays people have many offers at their disposal, and new marketing techniques must emerge to differentiate the products. When it comes to smartphones and tablets, the devices can make the difference by themselves if we use their computing power and capabilities to create something unique and interactive. With that in mind, three prototypes were developed during an internship: a face-recognition-based Customer Detection, a face-tracking solution with an Avatar, and interactive cross-app Guides. All three showed potential to be differentiating solutions in a retail store, not only raising the chance of a customer noticing the device but also of interacting with it to learn more about its features. The results were intended only as proofs of concept and were therefore not tested in the real world.
Abstract:
This paper analyses the influence of an extreme Saharan desert dust (DD) event on shortwave (SW) and longwave (LW) radiation at the EARLINET/AERONET Évora station (Southern Portugal) from 4 to 7 April 2011. There was also some cloud occurrence in the period; in this context, it is essential to quantify the effect of cloud presence on aerosol radiative forcing. A radiative transfer model was initialized with aerosol optical properties, cloud vertical properties and meteorological atmospheric vertical profiles. The intercomparison between the instantaneous TOA shortwave and longwave fluxes derived using CERES and those calculated using SBDART, which was fed with aerosol extinction coefficients derived from the CALIPSO and lidar-PAOLI observations while varying OPAC dataset parameters, was reasonably acceptable within the standard deviations. The dust aerosol type that yields the best fit was found to be the mineral accumulation mode. Therefore, the SBDART model constrained with the CERES observations can be used to reliably determine aerosol radiative forcing and heating rates. Aerosol radiative forcings and heating rates were derived in the SW (ARFSw, AHRSw) and LW (ARFLw, AHRLw) spectral ranges, considering a cloud- and aerosol-free reference atmosphere. We found that the AOD at 440 nm increased by a factor of 5 on 6 April with respect to the lower dust load on 4 April. It was responsible for a strong cooling radiative effect, indicated by the ARFSw value (−99 W/m² for a solar zenith angle of 60°), partly offset by a warming radiative effect according to the ARFLw value (+21.9 W/m²) at the surface. Overall, about 24% and 12% of the dust solar radiative cooling effect is compensated by its longwave warming effect at the surface and at the top of the atmosphere, respectively. Hence, larger aerosol loads could enhance the interplay between the absorption and re-emission processes, increasing the ARFLw with respect to that associated with moderate and low aerosol loads.
The unprecedented results derived from this work complement the findings in other regions on the modification of the radiative energy budget by dust aerosols, which could have relevant influences on the regional climate and will be a topic for future investigation.
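As a quick sanity check, the instantaneous surface forcings quoted above can be combined directly. Note that this single-snapshot ratio (~22%) is close to, but not identical to, the ~24% compensation reported, which presumably reflects a different averaging than one instantaneous value.

```python
# Instantaneous surface forcings quoted above (solar zenith angle 60 deg)
arf_sw = -99.0   # W/m^2, shortwave (cooling)
arf_lw = +21.9   # W/m^2, longwave (warming)

net = arf_sw + arf_lw               # net surface radiative forcing
compensated = arf_lw / abs(arf_sw)  # fraction of SW cooling offset by LW warming

print(f"net = {net:.1f} W/m^2, LW offsets {compensated:.0%} of the SW cooling")
```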
Abstract:
The modeling of the metal dust explosion phenomenon is important in order to safeguard industries from potential accidents. A key parameter of these models is the burning velocity, which represents the consumption rate of the reactants by the flame front during the combustion process. This work focuses on the experimental determination of the aluminium burning velocity through an alternative method, called the "Direct method". The study of the methods used and the results obtained is preceded by a general analysis of the dust explosion phenomenon, flame propagation, the characteristics of the metal combustion process and standard methods for determining the burning velocity. The "Direct method" requires a flame propagating through a tube, recorded by high-speed cameras; thus, the flame propagation test is carried out inside a vertical prototype made of glass. The study considers two optical techniques: direct visualization of the light emitted by the flame and the Particle Image Velocimetry (PIV) technique. These techniques were used simultaneously and allow the determination of two velocities: the flame propagation velocity and the flow velocity of the unburnt mixture. Since the burning velocity is defined by these two quantities, it is determined directly by subtracting the flow velocity of the fresh mixture from the flame propagation velocity. The results obtained by this direct determination are approximated by a linear curve and different non-linear curves, which show a fluctuating behaviour of the burning velocity. Furthermore, the burning velocity is strongly affected by turbulence, whose intensity can be evaluated from the PIV data. A comparison between burning velocity and turbulence intensity highlighted that both have a similar trend.
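The direct determination described above is a simple difference of the two measured velocities. A minimal sketch follows, with hypothetical sample values; the turbulence-intensity estimate (rms of velocity fluctuations over the mean) is an assumed common definition, not necessarily the PIV post-processing used in this work.

```python
import statistics

def burning_velocity(flame_speed, unburnt_flow_speed):
    """'Direct method': burning velocity = flame propagation velocity
    (from direct flame visualization) minus the flow velocity of the
    unburnt mixture ahead of the flame (from PIV)."""
    return flame_speed - unburnt_flow_speed

def turbulence_intensity(velocity_samples):
    """Turbulence intensity estimated as the rms of velocity
    fluctuations divided by the mean velocity (assumed definition)."""
    mean = statistics.fmean(velocity_samples)
    return statistics.pstdev(velocity_samples) / mean

# Hypothetical sample values in m/s, for illustration only
s_u = burning_velocity(2.4, 1.9)
ti = turbulence_intensity([1.0, 1.2, 0.8, 1.0])
print(f"burning velocity = {s_u:.2f} m/s, turbulence intensity = {ti:.2f}")
```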
Abstract:
Introduction. Synthetic cannabinoid receptor agonists (SCRAs) represent the widest group of New Psychoactive Substances (NPS) and, around 2021-2022, new compounds emerged on the market. The aims of the present research were to identify suitable urinary markers of Cumyl-CB-MEGACLONE, Cumyl-NB-MEGACLONE, Cumyl-NB-MINACA, 5F-EDMB-PICA, EDMB-PINACA and ADB-HEXINACA, to present data on their prevalence, and to adapt the methodology from the University of Freiburg to the University of Bologna. Materials and methods. Human phase-I metabolites detected in 46 authentic urine samples were confirmed in vitro with pooled human liver microsome (pHLM) assays, analyzed by liquid chromatography-quadrupole time-of-flight mass spectrometry (LC-qToF-MS). Prevalence data were obtained from urine samples collected for abstinence control programs. The method for studying SCRA metabolism in use at the University of Freiburg was adapted to the local facilities, tested in vitro with 5F-EDMB-PICA and applied to the study of ADB-HEXINACA metabolism. Results. Metabolites formed by mono-, di- and tri-hydroxylation were recommended as specific urinary biomarkers to monitor the consumption of SCRAs bearing a cumyl moiety. Monohydroxylated and defluorinated metabolites were suitable proof of 5F-EDMB-PICA consumption. Products of monohydroxylation and of amide or ester hydrolysis, coupled to monohydroxylation or ketone formation, were recognized as specific markers for EDMB-PINACA and ADB-HEXINACA. The LC-qToF-MS method was successfully adapted to the University of Bologna, as tested with 5F-EDMB-PICA in vitro metabolites. Prevalence data showed that 5F-EDMB-PINACA and EDMB-PINACA were more prevalent than ADB-HEXINACA, but only for a limited period. Conclusion. Due to the undetectability of the parent compounds in urine and to metabolites shared among structurally related compounds, the identification of specific urinary biomarkers as unequivocal proof of SCRA consumption remains challenging for forensic laboratories. Urinary biomarkers are necessary to monitor SCRA abuse, and prevalence data could help in establishing tailored strategies to prevent their spread, highlighting the role of legal medicine as a service to public health.
Abstract:
Applications that offer services based on the users' location are increasingly widespread, ranging from navigation systems to intelligent transport systems (ITS), which will allow vehicles to communicate with each other. Some of these services even make it possible to obtain incentives if the user visits or passes through certain areas; for example, a shop could offer coupons to people in the vicinity. However, the users' position is easy to falsify, and in this latter type of service users could obtain the incentives illicitly by cheating the system. It therefore becomes necessary to implement an architecture capable of preventing people from falsifying their position. To this end, numerous works have been proposed, which delegate the generation of "proofs of location" to centralized servers or deploy access points able to issue proofs or certificates to nearby users. In this thesis work we devised an architecture different from those of related work, seeking to exploit the functionality offered by blockchain technology and distributed storage. In this way it was possible to design a solution that is decentralized and transparent, with data immutability ensured by the blockchain. Furthermore, we detail an idea for a use case to be built on the proposed architecture, highlighting the advantages that could potentially be drawn from it. Finally, we implemented part of the system in question, measuring the times and costs required by transactions on some of the blockchains available today, using the infrastructures provided by Ethereum, Polygon and Algorand.