Abstract:
Recent palaeoglaciological studies on the West Antarctic shelf have mainly focused on the wide embayments of the Ross and Amundsen seas in order to reconstruct the extent and subsequent retreat of the West Antarctic Ice Sheet (WAIS) since the Last Glacial Maximum (LGM). However, the narrower shelf sectors between these two major embayments have remained largely unstudied in previous geological investigations, despite covering extensive areas of the West Antarctic shelf. Here, we present the first systematic marine geological and geophysical survey of a shelf sector offshore from the Hobbs Coast. It is dominated by a large grounding zone wedge (GZW), which fills the base of a palaeo-ice stream trough on the inner shelf and marks a phase of grounding-line stabilization during general WAIS retreat following the last maximum ice-sheet extent in this particular area (referred to as the Local Last Glacial Maximum, 'LLGM'). Reliable age determination on calcareous microfossils from the infill of a subglacial meltwater channel eroded into the GZW reveals that grounded ice had retreated landward of the GZW before ~20.88 cal. ka BP, with deglaciation of the innermost shelf occurring prior to ~12.97 cal. ka BP. Geophysical sub-bottom information from the inner, mid- and outer shelf indicates that grounded ice extended to the shelf edge prior to the formation of the GZW. Assuming the wedge was deposited during deglaciation, we infer that the maximum grounded ice extent occurred before ~20.88 cal. ka BP. This could suggest that WAIS retreat from the outer shelf was already underway during, or even prior to, the global LGM (~23-19 cal. ka BP). Our new findings give insights into the regional deglacial behaviour of this understudied part of the West Antarctic shelf and at the same time support early deglaciation ages recently presented for adjacent drainage sectors of the WAIS.
If correct, these findings contrast with the hypothesis that initial deglaciation of Antarctic Ice Sheets occurred synchronously at ~19 cal. ka BP.
Abstract:
The hydrologic system beneath the Antarctic Ice Sheet is thought to influence both the dynamics and distribution of fast flowing ice streams, which discharge most of the ice lost by the ice sheet. Despite considerable interest in understanding this subglacial network and its effect on ice flow, in situ observations from the ice sheet bed are exceedingly rare. Here we describe the first sediment cores recovered from an active subglacial lake. The lake, known as Subglacial Lake Whillans, is part of a broader, dynamic hydrologic network beneath the Whillans Ice Stream in West Antarctica. Even though "floods" pass through the lake, the lake floor shows no evidence of erosion or deposition by flowing water. By inference, these floods must have insufficient energy to erode or transport significant volumes of sediment coarser than silt. Consequently, water flow beneath the region is probably incapable of incising continuous channels into the bed and instead follows preexisting subglacial topography and surface slope. Sediment on the lake floor consists of till deposited during intermittent grounding of the ice stream following flood events. The fabrics within the till are weaker than those thought to develop in thick deforming beds, suggesting that subglacial sediment fluxes across the ice plain are currently low and unlikely to have a large stabilizing effect on the ice stream's grounding zone.
Abstract:
Anthropogenic CO2 emissions are acidifying the world's oceans. A growing body of evidence shows that ocean acidification impacts the growth and developmental rates of marine invertebrates. Here we test the impact of elevated seawater pCO2 (129 Pa, 1271 µatm) on early development and on larval metabolic and feeding rates in a marine model organism, the sea urchin Strongylocentrotus purpuratus. Growth and development were assessed by measuring total body length, body rod length, postoral rod length and posterolateral rod length. Comparing these parameters between treatments suggests that larvae suffer from a developmental delay (of ca. 8%) rather than from the previously postulated reductions in size at comparable developmental stages. Further, we found increases in respiration rates of up to +100% under elevated pCO2, while body-length-corrected feeding rates did not differ between larvae from the two treatments. Calculating scope for growth illustrates that larvae raised under high pCO2 spent on average 39 to 45% of the available energy on somatic growth, while control larvae could allocate between 78 and 80% of the available energy to growth processes. Our results highlight the importance of defining a standard frame of reference when comparing a given parameter between treatments, as observed differences can easily be due to the comparison of different larval ages with their specific sets of biological characters.
Abstract:
Background: Esophageal adenocarcinoma (EA) is one of the fastest rising cancers in western countries. Barrett's Esophagus (BE) is the premalignant precursor of EA. However, only a subset of BE patients develop EA, which complicates clinical management in the absence of valid predictors. Genetic risk factors for BE and EA are incompletely understood. This study aimed to identify novel genetic risk factors for BE and EA.
Methods: Within an international consortium of groups involved in the genetics of BE/EA, we performed the first meta-analysis of all genome-wide association studies (GWAS) available, involving 6,167 BE patients, 4,112 EA patients, and 17,159 representative controls, all of European ancestry, genotyped on Illumina high-density SNP arrays, collected from four separate studies within North America, Europe, and Australia. Meta-analysis was conducted using the fixed-effects inverse variance-weighting approach. We used the standard genome-wide significance threshold of 5×10⁻⁸ for this study. We also conducted an association analysis following re-weighting of loci using an approach that investigates annotation enrichment among the genome-wide significant loci. The entire GWAS data set was also analyzed using bioinformatics approaches, including functional annotation databases as well as gene-based and pathway-based methods, in order to identify pathophysiologically relevant cellular pathways.
Findings: We identified eight new risk loci associated with BE and EA, within or near the CFTR (rs17451754, P=4.8×10⁻¹⁰), MSRA (rs17749155, P=5.2×10⁻¹⁰), BLK (rs10108511, P=2.1×10⁻⁹), KHDRBS2 (rs62423175, P=3.0×10⁻⁹), TPPP/CEP72 (rs9918259, P=3.2×10⁻⁹), TMOD1 (rs7852462, P=1.5×10⁻⁸), SATB2 (rs139606545, P=2.0×10⁻⁸), and HTR3C/ABCC5 (rs9823696, P=1.6×10⁻⁸) genes. A further novel risk locus at LPA (rs12207195, posterior probability=0.925) was identified after re-weighting using significantly enriched annotations. This study thereby doubled the number of known risk loci. The strongest disease pathways identified (P<10⁻⁶) belong to muscle cell differentiation and to mesenchyme development/differentiation, which fit with current pathophysiological BE/EA concepts. To our knowledge, this study identified for the first time an EA-specific association (rs9823696, P=1.6×10⁻⁸) near HTR3C/ABCC5 which is independent of BE development (P=0.45).
Interpretation: The identified disease loci and pathways reveal new insights into the etiology of BE and EA. Furthermore, the EA-specific association at HTR3C/ABCC5 may constitute a novel genetic marker for the prediction of transition from BE to EA. Mutations in CFTR, one of the new risk loci identified in this study, cause cystic fibrosis (CF), the most common recessive disorder in Europeans. Gastroesophageal reflux (GER) belongs to the phenotypic CF spectrum and represents the main risk factor for BE/EA. Thus, the CFTR locus may trigger a common GER-mediated pathophysiology.
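The fixed-effects inverse variance-weighting approach named in the Methods can be sketched in a few lines. This is a generic illustration with made-up effect sizes, not the consortium's data or code:

```python
import math

# Fixed-effects inverse-variance meta-analysis: each study's effect estimate is
# weighted by the inverse of its squared standard error. Values below are hypothetical.

def inverse_variance_meta(betas, ses):
    """Pooled effect and standard error, with weights w_i = 1 / se_i^2."""
    weights = [1.0 / se**2 for se in ses]
    pooled_beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled_beta, pooled_se

# Hypothetical per-study log-odds ratios for one SNP across four GWAS:
betas = [0.15, 0.12, 0.18, 0.10]
ses = [0.05, 0.04, 0.06, 0.05]
beta, se = inverse_variance_meta(betas, ses)
z = beta / se  # z-score from which the combined P value is derived
print(round(beta, 3), round(z, 2))
```

More precise studies (smaller standard errors) dominate the pooled estimate, which is why a combined analysis can reach genome-wide significance when no single study does.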
Abstract:
A small-scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV) from a source of <0.5 mm in size. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed Thallium-doped Caesium Iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single-shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam in a single shot, demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high-density nuclear material. With recent developments of high-power laser systems towards 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned.
Abstract:
In many countries wind energy has become an indispensable part of the electricity generation mix. Opportunities for ground-based wind turbine systems are becoming increasingly constrained by limitations on turbine hub heights and blade lengths, and by location restrictions linked to environmental and permitting issues, including special areas of conservation and social acceptance of visual and noise impacts. In the last decade there have been numerous proposals to harness high-altitude winds, such as tethered kites, airfoils and dirigible-based rotors. These technologies are designed to operate above the neutral atmospheric boundary layer at 1,300 m, where winds are more powerful and persistent, enabling much higher electricity generation capacities. This paper presents an in-depth review of the state of the art of high-altitude wind power, evaluates the technical and economic viability of deploying high-altitude wind power as a resource in Northern Ireland and identifies the optimal locations by considering wind data and geographical constraints. The key findings show that the total viable area over Northern Ireland for high-altitude wind-harnessing devices is 5,109.6 km², with an average wind power density of 1,998 W/m² over a 20-year span at a fixed altitude of 3,000 m. An initial budget for a 2 MW pumping kite device indicated a total cost of £1,751,402, suggesting it is economically competitive with conventional wind-harnessing devices.
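As a rough, hedged check of the reported 1,998 W/m² figure, wind power density follows the standard P/A = ½ρv³ relation. The air density below is an assumed ISA-like value for 3,000 m altitude; none of these numbers come from the paper's budget:

```python
# Standard wind power density relation; all inputs are illustrative assumptions.
def wind_power_density(rho, v):
    """Power per unit swept area (W/m^2) for air density rho (kg/m^3) and wind speed v (m/s)."""
    return 0.5 * rho * v**3

rho_3000m = 0.909  # assumed air density at ~3,000 m altitude (kg/m^3)
target = 1998.0    # average power density reported for Northern Ireland (W/m^2)

# Mean wind speed implied by that average density:
v = (2 * target / rho_3000m) ** (1 / 3)
print(f"implied mean wind speed: {v:.1f} m/s")
print(f"round trip: {wind_power_density(rho_3000m, v):.0f} W/m^2")
```

The implied sustained wind speed of roughly 16 m/s is well above typical surface winds, which illustrates why altitude matters: power density scales with the cube of wind speed.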
Abstract:
Energy efficiency improvement has been a key objective of China's long-term energy policy. In this paper, we derive single-factor technical energy efficiency (abbreviated as energy efficiency) in China from multi-factor efficiency estimated by means of a translog production function and a stochastic frontier model on the basis of panel data on 29 Chinese provinces over the period 2003–2011. We find that average energy efficiency has been increasing over the research period and that the provinces with the highest energy efficiency are on the east coast while those with the lowest are in the west, with an intermediate corridor in between. In the analysis of the determinants of energy efficiency by means of a spatial Durbin error model, factors in both the own province and first-order neighboring provinces are considered. Per capita income in the own province has a positive effect. Furthermore, foreign direct investment and population density in the own province and in neighboring provinces have positive effects, whereas the share of state-owned enterprises in Gross Provincial Product in the own province and in neighboring provinces has negative effects. From the analysis it follows that inflow of foreign direct investment and reform of state-owned enterprises are important policy handles.
Abstract:
Gate-tunable two-dimensional (2D) materials-based quantum capacitors (QCs) and van der Waals heterostructures involve tuning transport or optoelectronic characteristics by the field effect. Recent studies have attributed the observed gate-tunable characteristics to the change of the Fermi level in the first 2D layer adjacent to the dielectrics, whereas the penetration of the field effect through the one-molecule-thick material is often ignored or oversimplified. Here, we present a multiscale theoretical approach that combines first-principles electronic structure calculations and the Poisson–Boltzmann equation methods to model penetration of the field effect through graphene in a metal–oxide–graphene–semiconductor (MOGS) QC, including quantifying the degree of “transparency” for graphene two-dimensional electron gas (2DEG) to an electric displacement field. We find that the space charge density in the semiconductor layer can be modulated by gating in a nonlinear manner, forming an accumulation or inversion layer at the semiconductor/graphene interface. The degree of transparency is determined by the combined effect of graphene quantum capacitance and the semiconductor capacitance, which allows us to predict the ranking for a variety of monolayer 2D materials according to their transparency to an electric displacement field as follows: graphene > silicene > germanene > WS2 > WTe2 > WSe2 > MoS2 > phosphorene > MoSe2 > MoTe2, when the majority carrier is electron. Our findings reveal a general picture of operation modes and design rules for the 2D-materials-based QCs.
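As a hedged illustration of how the "combined effect" of the capacitances can set the degree of transparency, one can treat the oxide, graphene quantum capacitance C_Q and semiconductor capacitance as approximately in series. This is a textbook simplification, not the paper's multiscale model, and all values below are made up:

```python
# Capacitors in series: a larger graphene quantum capacitance C_Q lets the stack
# approach its "fully transparent" limit (oxide + semiconductor only).
# All capacitance values are illustrative, in arbitrary consistent units (e.g. uF/cm^2).

def series_capacitance(caps):
    """Total capacitance of capacitors connected in series."""
    return 1.0 / sum(1.0 / c for c in caps)

C_ox, C_s = 1.0, 2.0           # assumed oxide and semiconductor capacitances
limit = series_capacitance([C_ox, C_s])  # C_Q -> infinity: fully transparent graphene

for C_Q in (0.5, 5.0, 500.0):  # small -> large quantum capacitance
    total = series_capacitance([C_ox, C_Q, C_s])
    print(f"C_Q={C_Q:>6}: C_total={total:.3f}, fraction of transparent limit={total / limit:.2f}")
```

The trend, not the numbers, is the point: as C_Q grows, gating charge increasingly modulates the semiconductor rather than being absorbed by the graphene 2DEG.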
Abstract:
The present study was carried out in collaboration with J. Faria e Filhos, a Madeira wine producer, and its main goal was to fully characterize three wines produced during the 2014 harvest and to identify possible points of improvement in the winemaking process. The winemaking process was followed for 4 weeks, registering the amounts of grapes received, the fermentation temperatures, the time at which fermentation was stopped and the evolution of must densities until fortification. Musts and wines were characterized in terms of density, total and volatile acidity, alcohol content, pH, total polyphenols, organic acid composition, sugar concentration and volatile profile. In addition, an analytical methodology based on SPME-GC-MS was developed and validated to quantify volatile fatty acids. Briefly, the following key features were obtained for this methodology: linearity (R²=0.999), high sensitivity (LOD = 0.026-0.068 mg/L), suitable precision (repeatability and reproducibility lower than 8.5%) and good recoveries (103.11-119.46%). The results reveal that fermentation temperatures should be controlled more strictly, in order to ensure a better balance in the proportions of some volatile compounds, namely the esters and higher alcohols, and to minimize the concentration of some volatiles, namely hexanoic, octanoic and decanoic acids, which are not positive for the wine aroma when above their odour thresholds. Also, regarding the moment at which fermentation is stopped, it was verified that changes can be introduced which may also benefit the typicity of the Madeira wine bouquet.
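The validation figures quoted above (recoveries, repeatability) follow standard definitions. A minimal sketch with hypothetical replicate data, not the study's measurements:

```python
import statistics

# How recovery (%) and repeatability (relative standard deviation, %) are typically
# computed when validating a quantification method. All data below are hypothetical.

def recovery_percent(measured, spiked):
    """Recovery (%) of a spiked analyte concentration."""
    return 100.0 * measured / spiked

def rsd_percent(replicates):
    """Relative standard deviation (%) of replicate measurements."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical replicate measurements (mg/L) of a volatile fatty acid in a spiked sample:
spiked = 1.00
replicates = [1.05, 1.08, 1.02, 1.06, 1.04]
print(f"recovery = {recovery_percent(statistics.mean(replicates), spiked):.1f}%")
print(f"RSD = {rsd_percent(replicates):.1f}%")
```

A recovery near 100% and an RSD below the method's acceptance limit (8.5% in the study) would indicate acceptable accuracy and precision.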
Abstract:
Verbal fluency is the ability to produce a satisfying sequence of spoken words during a given time interval. The core of verbal fluency lies in the capacity to manage the executive aspects of language. The standard scores of the semantic verbal fluency test are broadly used in the neuropsychological assessment of the elderly, and different analytical methods are likely to extract even more information from the data generated in this test. Graph theory, a mathematical approach to analyzing relations between items, represents a promising tool for understanding a variety of neuropsychological states. This study reports a graph analysis of data generated by the semantic verbal fluency test in cognitively healthy elderly (NC), patients with Mild Cognitive Impairment, subtypes amnestic (aMCI) and amnestic multiple domain (a+mdMCI), and patients with Alzheimer's disease (AD). Sequences of words were represented as a speech graph in which every word corresponded to a node and temporal links between words were represented by directed edges. To characterize the structure of the data we calculated 13 speech graph attributes (SGAs). The individuals were compared when divided into three (NC, MCI, AD) and four (NC, aMCI, a+mdMCI, AD) groups. When the three groups were compared, significant differences were found in the standard measure of correct words produced and in three SGAs: diameter, average shortest path, and network density. The SGAs sorted the elderly groups with good specificity and sensitivity. When the four groups were compared, the groups differed significantly in network density, except between the two MCI subtypes and between NC and aMCI. The diameter of the network and the average shortest path differed significantly between NC and AD, and between aMCI and AD. The SGAs sorted the elderly into their groups with good specificity and sensitivity, performing better than the standard score of the task.
These findings support a new methodological framework for assessing the strength of semantic memory through the verbal fluency task, with the potential to amplify the predictive power of this test. Graph analysis is likely to become clinically relevant in neurology and psychiatry, and may be particularly useful for the differential diagnosis of the elderly.
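A minimal sketch of the speech graph construction and two of the SGAs mentioned above (network density, average shortest path), using a hypothetical word sequence rather than the study's data:

```python
from collections import defaultdict, deque

# Speech graph: nodes are unique words, directed edges link temporally consecutive words.

def build_speech_graph(words):
    """Directed adjacency sets built from consecutive word pairs."""
    adj = defaultdict(set)
    for a, b in zip(words, words[1:]):
        adj[a].add(b)
        adj[b]  # touching the key registers b as a node even if it has no out-edges
    return adj

def density(adj):
    """Number of edges divided by the n*(n-1) possible directed edges."""
    n = len(adj)
    e = sum(len(nbrs) for nbrs in adj.values())
    return e / (n * (n - 1)) if n > 1 else 0.0

def shortest_paths_from(adj, src):
    """BFS distances from src to every reachable node."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def average_shortest_path(adj):
    """Mean distance over all reachable ordered node pairs."""
    total, pairs = 0, 0
    for src in adj:
        for dst, d in shortest_paths_from(adj, src).items():
            if dst != src:
                total, pairs = total + d, pairs + 1
    return total / pairs if pairs else 0.0

# Hypothetical semantic fluency output (category "animals"), with one repeated word:
seq = ["dog", "cat", "lion", "tiger", "cat", "horse"]
g = build_speech_graph(seq)
print(len(g), density(g))
```

Note how the repetition of "cat" creates a cycle, which is exactly the kind of structure that density, diameter and path-length attributes capture but a raw word count misses.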
Abstract:
With the world of professional sports shifting towards better sports analytics, the demand for vision-based performance analysis has grown rapidly in recent years. In addition, the nature of many sports does not allow any kind of sensors or other wearable markers to be attached to players for monitoring their performance during competitions. This makes systematic observation, such as tracking information about the players, a potential tool to help coaches develop the visual skills and perceptual awareness needed to make decisions about team strategy or training plans. My PhD project is part of a bigger ongoing project between sport scientists and computer scientists, also involving industry partners and sports organisations. The overall idea is to investigate the contribution technology can make to the analysis of sports performance, using the example of team sports such as rugby, football or hockey. A particular focus is on vision-based tracking, so that information about the location and dynamics of the players can be gained without any additional sensors on the players. To start with, prior approaches to visual tracking are extensively reviewed and analysed. In this thesis, methods are proposed to handle the target appearance changes in visual tracking caused by intrinsic factors (e.g. pose variation) and extrinsic factors, such as occlusion. This analysis highlights the importance of the proposed visual tracking algorithms, which address these challenges and provide robust and accurate frameworks to estimate the target state in a complex tracking scenario such as a sports scene, thereby facilitating the tracking process. Next, a framework for continuously tracking multiple targets is proposed. Compared to single-target tracking, multi-target tracking, such as tracking the players on a sports field, poses an additional difficulty, namely data association, which needs to be addressed.
Here, the aim is to locate all targets of interest, infer their trajectories and decide which observation corresponds to which target trajectory. In this thesis, an efficient framework is proposed to handle this particular problem, especially in sports scenes, where players of the same team tend to look similar and exhibit complex interactions and unpredictable movements, resulting in matching ambiguity between the players. The presented approach is evaluated on different sports datasets and shows promising results. Finally, information from the proposed tracking system is utilised as the basic input for higher-level performance analysis, such as tactics and team formations, which can help coaches to design a better training plan. Due to the continuous nature of many team sports (e.g. soccer, hockey), it is not straightforward to infer high-level team behaviours, such as players' interactions. The proposed framework relies on two distinct levels of performance analysis: low-level analysis, such as identifying player positions on the field, and high-level analysis, where the aim is to estimate the density of player locations or detect their possible interaction groups. The related experiments show that the proposed approach can effectively extract this high-level information, which has many potential applications.
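As a hedged sketch of the data-association step, the snippet below uses greedy nearest-neighbour matching, a deliberately simple baseline rather than the framework proposed in the thesis. All coordinates are made up:

```python
import math

# Greedy nearest-neighbour data association: match each track to the closest
# unused detection, refusing matches beyond a distance gate. A simple baseline only.

def associate(tracks, detections, gate=50.0):
    """Return (track_index, detection_index) pairs, closest matches first."""
    candidates = sorted(
        (math.dist(t_pos, d_pos), ti, di)
        for ti, t_pos in enumerate(tracks)
        for di, d_pos in enumerate(detections)
    )
    pairs, used_t, used_d = [], set(), set()
    for dist, ti, di in candidates:
        if dist <= gate and ti not in used_t and di not in used_d:
            pairs.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return pairs

# Two player tracks and two new detections (pixel coordinates):
tracks = [(100.0, 200.0), (400.0, 220.0)]
detections = [(405.0, 218.0), (103.0, 205.0)]
print(associate(tracks, detections))
```

Greedy matching fails exactly where the thesis focuses: when similar-looking players cross paths, several candidate matches fall within the gate and a more global, appearance-aware association is needed.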
Abstract:
Incidental findings on low-dose CT images obtained during hybrid imaging are an increasing phenomenon as CT technology advances. Understanding the diagnostic value of incidental findings along with the technical limitations is important when reporting image results and recommending follow-up, which may result in an additional radiation dose from further diagnostic imaging and an increase in patient anxiety. This study assessed lesions incidentally detected on CT images acquired for attenuation correction on two SPECT/CT systems. Methods: An anthropomorphic chest phantom containing simulated lesions of varying size and density was imaged on an Infinia Hawkeye 4 and a Symbia T6 using the low-dose CT settings applied for attenuation correction acquisitions in myocardial perfusion imaging. Twenty-two interpreters assessed 46 images from each SPECT/CT system (15 normal images and 31 abnormal images; 41 lesions). Data were evaluated using a jackknife alternative free-response receiver-operating-characteristic analysis (JAFROC). Results: JAFROC analysis showed a significant difference (P < 0.0001) in lesion detection, with the figures of merit being 0.599 (95% confidence interval, 0.568, 0.631) and 0.810 (95% confidence interval, 0.781, 0.839) for the Infinia Hawkeye 4 and Symbia T6, respectively. Lesion detection on the Infinia Hawkeye 4 was generally limited to larger, higher-density lesions. The Symbia T6 allowed improved detection rates for midsized lesions and some lower-density lesions. However, interpreters struggled to detect small (5 mm) lesions on both image sets, irrespective of density. Conclusion: Lesion detection is more reliable on low-dose CT images from the Symbia T6 than from the Infinia Hawkeye 4. This phantom-based study gives an indication of potential lesion detection in the clinical context as shown by two commonly used SPECT/CT systems, which may assist the clinician in determining whether further diagnostic imaging is justified.
Abstract:
We propose a novel analysis alternative based on two Fourier transforms for emotion recognition from speech. Fourier analysis allows different signals to be displayed and synthesized in terms of power spectral density distributions. A spectrogram of the voice signal is obtained by performing a short-time Fourier transform with Gaussian windows; this spectrogram portrays frequency-related features, such as vocal tract resonances and quasi-periodic excitations during voiced sounds. Emotions induce such characteristics in speech, which become apparent in the spectrogram's time-frequency distribution. The signal's time-frequency representation from the spectrogram is then treated as an image and processed through a 2-dimensional Fourier transform in order to perform a spatial Fourier analysis of it. Finally, features related to emotions in voiced speech are extracted and presented.
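A minimal sketch of the first step described above, a short-time Fourier transform with a Gaussian window. The naive DFT and all parameters (window length, hop, sampling rate, sigma) are illustrative assumptions, not the authors' settings:

```python
import cmath
import math

# Short-time Fourier transform with a Gaussian window, using a naive O(n^2) DFT
# for clarity. All parameters below are illustrative.

def gaussian_window(n, sigma_ratio=0.15):
    """Gaussian window of length n; sigma is a fraction of the window length."""
    centre, sigma = (n - 1) / 2.0, sigma_ratio * n
    return [math.exp(-0.5 * ((i - centre) / sigma) ** 2) for i in range(n)]

def dft(frame):
    """Naive discrete Fourier transform of one frame."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def stft(signal, win_len=64, hop=32):
    """Spectrogram magnitudes: one DFT per Gaussian-windowed frame."""
    win = gaussian_window(win_len)
    starts = range(0, len(signal) - win_len + 1, hop)
    return [[abs(x) for x in dft([signal[s + i] * win[i] for i in range(win_len)])]
            for s in starts]

# A 440 Hz tone sampled at 8 kHz; its energy should concentrate near one DFT bin.
fs, f0 = 8000, 440.0
signal = [math.sin(2 * math.pi * f0 * t / fs) for t in range(512)]
spec = stft(signal)
peak_bin = max(range(32), key=lambda k: spec[0][k])  # positive-frequency half
print(peak_bin)
```

In the proposed pipeline this magnitude spectrogram would then itself be treated as a 2-D image and passed through a second, spatial Fourier transform.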
Abstract:
The biodiversity and distribution of benthic meiofauna in sediments of the Southern Caspian Sea (Mazandaran) were studied in order to describe the fauna and determine its relationship with environmental factors. Sediment samples were gathered from 12 stations (at depths of 5, 10, 20 and 50 meters) over 6 months (2012). Environmental factors of the water near the bottom, including temperature, salinity, dissolved oxygen and pH, were measured during sampling with a CTD, while grain size, total organic matter percentage and calcium carbonate were measured in the laboratory. Across the different months, the average water temperature (9.52-23.93 °C), dissolved oxygen (7.71-10.53 mg/L), salinity (10.57±0.07 and 10.75±0.04 ppt), pH (7.44±0.29 and 7.41±0.22), EC (17.97±0.12 and 18.30±0.04 μS/cm), TDS (8.92±0.04 and 9.14±0.02 mg/L), total organic matter (5.83±1.43 and 6.25±0.97%) and calcium carbonate (2.36±0.36 and 1.68±0.19%) were measured, respectively. The sediment samples mostly consisted of fine sand, very fine sand, silt and clay. From the 4 animal groups (Foraminifera, Crustacea, worms and Mollusca), 40 species belonging to 29 genera in 25 families were identified. The cosmopolitan foraminifer Ammonia beccarii caspica was common at all sampling stations. The results showed that depth was an important factor in the distribution of meiofauna. The highest densities of foraminifera and crustaceans were observed at a depth of 20 m, and of molluscs and worms at 5 m. The Shannon diversity index decreased with depth, showing that diversity was higher in shallow water than in deep water. The maximum and minimum mean Shannon index values, 0.93 and 0.43, were observed at depths of 5 m and 50 m, respectively. The Shannon index values indicate that this area is under environmental pressure, and the Pielou evenness index showed that the distribution in this area was not even.
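The Shannon diversity index and Pielou's evenness used above follow standard definitions. A minimal sketch with hypothetical counts, not the study's data:

```python
import math

# Shannon diversity H' = -sum(p_i * ln p_i) and Pielou evenness J' = H' / ln(S),
# where p_i are species proportions and S is the number of species present.
# The count data below are hypothetical.

def shannon_index(counts):
    """Shannon diversity H' from a list of species counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def pielou_evenness(counts):
    """Pielou evenness J' = H' / ln(S); 1.0 means perfectly even."""
    s = sum(1 for c in counts if c > 0)
    return shannon_index(counts) / math.log(s) if s > 1 else 0.0

# Hypothetical meiofauna counts at a shallow station (4 species, fairly even)
# and a deep station dominated by a single species:
shallow = [30, 25, 25, 20]
deep = [90, 5, 3, 2]
print(round(shannon_index(shallow), 2), round(shannon_index(deep), 2))
```

Dominance by one species drives both H' and J' down, which is the pattern the abstract reports with increasing depth.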