994 results for sediment reduction
Abstract:
Capacity reduction programmes, in the form of buybacks or decommissioning, have had relatively widespread application in fisheries in the US, Europe and Australia. A common criticism of such programmes is that they remove the least efficient vessels first, resulting in an increase in average efficiency of the remaining fleet, which tends to increase the effective fishing power of the remaining fleet. In this paper, the effects of a buyback programme on average technical efficiency in Australia’s Northern Prawn Fishery are examined using a multi-output production function approach with an explicit inefficiency model. As expected, the results indicate that average efficiency of the remaining vessels was generally greater than that of the removed vessels. Further, there was some evidence of an increase in average scale efficiency in the fleet as the remaining vessels were closer, on average, to the optimal scale. Key factors affecting technical efficiency included company structure and the number of vessels fishing. In regard to fleet size, our model suggests positive externalities associated with more boats fishing at any point in time (due to information sharing and reduced search costs), but also negative externalities due to crowding, with the latter effect dominating the former. Hence, the buyback resulted in a net increase in the individual efficiency of the remaining vessels due to reduced crowding, as well as raising average efficiency through removal of less efficient vessels.
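The abstract does not reproduce the model, but a standard stochastic production frontier with an explicit inefficiency model (Battese-Coelli style), consistent with the approach described, can be sketched as follows; all symbols here are illustrative rather than the paper's own notation:

```latex
% Illustrative frontier: output y, inputs x, noise v, inefficiency u >= 0
\ln y_{it} = \mathbf{x}_{it}'\boldsymbol{\beta} + v_{it} - u_{it},
\qquad v_{it} \sim N(0,\sigma_v^2), \quad u_{it} \ge 0
% Explicit inefficiency model: covariates z (e.g. company structure,
% number of vessels fishing) drive inefficiency; TE is technical efficiency
u_{it} = \mathbf{z}_{it}'\boldsymbol{\delta} + w_{it},
\qquad \mathrm{TE}_{it} = \exp(-u_{it})
```

Comparing the distribution of TE_it for removed versus remaining vessels would then underpin the kind of before/after efficiency comparison the abstract describes.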
Accelerometer data reduction: a comparison of four reduction algorithms on select outcome variables
Abstract:
Purpose Accelerometers are recognized as a valid and objective tool to assess free-living physical activity. Despite the widespread use of accelerometers, there is no standardized way to process and summarize their data, which limits our ability to compare results across studies. This paper a) reviews the decision rules researchers have used in the past, b) compares the impact of using different decision rules on a common data set, and c) identifies issues to consider for accelerometer data reduction. Methods The methods sections of studies published in 2003 and 2004 were reviewed to determine what decision rules previous researchers have used to identify the wearing period, the minimal wear requirement for a valid day, spurious data, and the number of days used to calculate the outcome variables, and to extract bouts of moderate-to-vigorous physical activity (MVPA). For this study, four data reduction algorithms that employ different decision rules were used to analyze the same data set. Results The review showed that, among studies that reported their decision rules, much variability was observed. Overall, the analyses suggested that using different algorithms impacted several important outcome variables. The most stringent algorithm yielded significantly lower wearing time, the lowest activity counts per minute and counts per day, and fewer minutes of MVPA per day. An exploratory sensitivity analysis revealed that the most stringent inclusion criterion had an impact on sample size and wearing time, which in turn affected many outcome variables. Conclusions These findings suggest that the decision rules employed to process accelerometer data have a significant impact on important outcome variables. Until guidelines are developed, it will remain difficult to compare findings across studies.
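To make these decision rules concrete, the sketch below applies one illustrative rule set to a day of minute-level counts; the thresholds (a 60-minute zero-count run for non-wear, 10 h of wear for a valid day, a spurious-count ceiling, and an MVPA cut-point) are common choices from this literature, not the four specific algorithms compared in the paper.

```python
import numpy as np

def wear_time_mask(counts, nonwear_window=60):
    """Mark each minute worn/not-worn: a run of >= nonwear_window
    consecutive zero-count minutes is treated as non-wear."""
    worn = np.ones(len(counts), dtype=bool)
    run_start = None
    for i, c in enumerate(counts):
        if c == 0:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= nonwear_window:
                worn[run_start:i] = False
            run_start = None
    if run_start is not None and len(counts) - run_start >= nonwear_window:
        worn[run_start:] = False
    return worn

def summarize_day(counts, valid_day_hours=10, spurious=20000, mvpa_cutpoint=1952):
    """Apply one illustrative set of decision rules to a day of counts."""
    counts = np.asarray(counts)
    counts = np.where(counts > spurious, 0, counts)   # zero out spurious spikes
    worn = wear_time_mask(counts)
    wear_min = int(worn.sum())
    if wear_min < valid_day_hours * 60:
        return None                                    # day excluded as invalid
    mvpa_min = int((counts[worn] >= mvpa_cutpoint).sum())
    return {"wear_min": wear_min,
            "counts_per_min": float(counts[worn].mean()),
            "mvpa_min": mvpa_min}
```

Tightening any one of these thresholds shrinks wear time, valid-day counts, and MVPA minutes simultaneously, which is exactly the sensitivity the paper reports.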
Abstract:
Background Accelerometers have become one of the most common methods of measuring physical activity (PA). Thus, the validity of accelerometer data reduction approaches remains an important research area. Yet, few studies directly compare data reduction approaches and other PA measures in free-living samples. Objective To compare PA estimates provided by 3 accelerometer data reduction approaches (Crouter's 2-regression model, Crouter's refined 2-regression model, and the weighted cut-point method adopted in the National Health and Nutrition Examination Survey, NHANES, 2003-2004 and 2005-2006 cycles), steps, and 2 self-reported estimates (IPAQ and 7-day PA recall). Methods A worksite sample (N = 87) completed online surveys and wore ActiGraph GT1M accelerometers and pedometers (SW-200) during waking hours for 7 consecutive days. Daily time spent in sedentary, light, moderate, and vigorous intensity activity and the percentage of participants meeting PA recommendations were calculated and compared. Results Crouter's 2-regression (161.8 ± 52.3 minutes/day) and refined 2-regression (137.6 ± 40.3 minutes/day) models provided significantly higher estimates of moderate and vigorous PA, and higher proportions of participants meeting PA recommendations (91% and 92%, respectively), than the NHANES weighted cut-point method (39.5 ± 20.2 minutes/day, 18%). Differences between the other measures were also significant. Conclusions When comparing 3 accelerometer cut-point methods, steps, and self-report measures, estimates of PA participation vary substantially.
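For illustration, a weighted cut-point method of the NHANES kind classifies each worn minute by its count value. The thresholds below are the widely cited NHANES adult cut-points (Troiano et al.); treat them as assumptions rather than the exact values used in this study.

```python
import numpy as np

# Commonly cited NHANES adult cut-points (counts/min); assumed, not verified
# against this particular paper's processing.
BANDS = [("sedentary", 0, 100),
         ("light", 100, 2020),
         ("moderate", 2020, 5999),
         ("vigorous", 5999, float("inf"))]

def minutes_by_intensity(counts):
    """Minutes spent in each intensity band for one day of minute-level
    counts that have already been screened for non-wear."""
    counts = np.asarray(counts)
    return {name: int(((counts >= lo) & (counts < hi)).sum())
            for name, lo, hi in BANDS}
```

Because the 2-regression models estimate energy expenditure minute by minute instead of binning raw counts, they can classify far more time as moderate-to-vigorous than a cut-point scheme, which is consistent with the large gap reported above.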
Abstract:
Background A feature of epithelial to mesenchymal transition (EMT) relevant to tumour dissemination is the reorganization of the actin cytoskeleton and focal contacts, influencing cellular ECM adherence and motility. This is coupled with the transcriptional repression of E-cadherin, often mediated by Snail1, Snail2 and Zeb1/δEF1. These genes, overexpressed in breast carcinomas, are known targets of growth factor-initiated pathways; however, it is less clear how alterations in ECM attachment cross-modulate to regulate these pathways. EGF induces EMT in the breast cancer cell line PMC42-LA, and the kinase inhibitor staurosporine (ST) induces EMT in embryonic neural epithelial cells, with F-actin de-bundling and disruption of cell-cell adhesion, via inhibition of aPKC. Methods PMC42-LA cells were treated for 72 h with 10 ng/ml EGF, 40 nM ST, or both, and assessed for expression of E-cadherin repressor genes (Snail1, Snail2, Zeb1/δEF1) and EMT-related genes by QRT-PCR, multiplex tandem PCR (MT-PCR) and immunofluorescence ± cycloheximide. Actin and focal contacts (paxillin) were visualized by confocal microscopy. A public database of human breast cancers was assessed for expression of Snail1 and Snail2 in relation to outcome. Results When PMC42-LA cells were treated with EGF, Snail2 was the principal E-cadherin repressor induced. With ST or ST+EGF this shifted to Snail1, with a more extreme EMT and Zeb1/δEF1 induction seen with ST+EGF. ST reduced stress fibres and focal contact size rapidly and independently of gene transcription. Gene expression analysis by MT-PCR indicated that ST repressed many genes which were induced by EGF (EGFR, CAV1, CTGF, CYR61, CD44, S100A4) and induced genes which alter the actin cytoskeleton (NLF1, NLF2, EPHB4). Examination of the public database of breast cancers revealed that tumours exhibiting higher Snail1 expression have an increased risk of disease recurrence. This was not seen for Snail2, and Zeb1/δEF1 showed a reverse correlation, with lower expression values being predictive of increased risk. Conclusion ST in combination with EGF directed a greater EMT via actin depolymerisation and focal contact size reduction, resulting in a loosening of cell-ECM attachment along with Snail1-Zeb1/δEF1 induction. This appeared fundamentally different to the EGF-induced EMT, highlighting the multiple pathways which can regulate EMT. Our findings add support for a functional role for Snail1 in invasive breast cancer.
Abstract:
The cotton strip assay (CSA) is an established technique for measuring soil microbial activity. The technique involves burying cotton strips and measuring their tensile strength after a certain time. This gives a measure of the rotting rate, R, of the cotton strips, which in turn serves as a measure of soil microbial activity. This paper examines properties of the technique and indicates how the assay can be optimised. Humidity conditioning of the cotton strips before measuring their tensile strength reduced the within-day and between-day variance and enabled the distribution of the tensile strength measurements to approximate normality. The test data came from a three-way factorial experiment (two soils, two temperatures, three moisture levels). The cotton strips were buried in the soil for intervals of time ranging up to 6 weeks. This enabled the rate of loss of cotton tensile strength with time to be studied under a range of conditions. An inverse cubic model accounted for greater than 90% of the total variation within each treatment combination. This offers support for summarising the decomposition process by the single parameter R. The approximate variance of the decomposition rate was estimated from a function incorporating the variance of tensile strength and the derivative of R with respect to tensile strength. This variance function has a minimum when the measured strength is approximately 2/3 of the original strength. The estimates of R are almost unbiased and relatively robust against the cotton strips being left in the soil for more or less than the optimal time. We conclude that the rotting rate R should be measured using the inverse cubic equation, and that the cotton strips should be left in the soil until their strength has been reduced to about 2/3 of its original value.
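The abstract does not state the inverse cubic equation explicitly, but a form S_t = S_0 / (1 + (Rt)^3) is consistent with a single-parameter summary and reproduces the stated variance minimum at roughly 2/3 of the original strength; the sketch below works under that assumption.

```python
import numpy as np

def rotting_rate(S0, St, t):
    """R from the assumed inverse cubic model St = S0 / (1 + (R*t)**3),
    i.e. R = ((S0/St) - 1)**(1/3) / t."""
    return ((S0 / St) - 1.0) ** (1.0 / 3.0) / t

def var_R(S0, St, t, var_S):
    """Delta-method variance of R: Var(R) ~ (dR/dSt)**2 * Var(St)."""
    dR_dS = -S0 / (3.0 * t * St**2 * ((S0 / St) - 1.0) ** (2.0 / 3.0))
    return dR_dS**2 * var_S

# Under this model the variance is minimised near St = (2/3) * S0,
# matching the abstract's recommendation:
S0, t, var_S = 100.0, 4.0, 1.0
grid = np.linspace(0.3, 0.95, 200) * S0
print(grid[np.argmin(var_R(S0, grid, t, var_S))] / S0)  # ~0.667
```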
Abstract:
This article elucidates and analyzes the fundamental underlying structure of the renormalization group (RG) approach as it applies to the solution of any differential equation involving multiple scales. The amplitude equation derived through the elimination of secular terms arising from a naive perturbation expansion of the solution to these equations by the RG approach is reduced to an algebraic equation which is expressed in terms of the Thiele semi-invariants or cumulants of the eliminant sequence $\{Z_i\}_{i=1}$. Its use is illustrated through the solution of both linear and nonlinear perturbation problems, and certain results from the literature are recovered as special cases. The fundamental structure that emerges from the application of the RG approach is not the amplitude equation but the aforementioned algebraic equation.
Abstract:
Sub-oxide-to-metallic, highly crystalline nanowires with uniformly distributed nanopores in the 3 nm range have been synthesized by a unique combination of plasma oxidation, re-deposition and electron-beam reduction. The electron-beam-exposure-controlled oxide → sub-oxide → metal transition is explained using a non-equilibrium model.
Abstract:
Using density functional theory, we have investigated the catalytic properties of the bimetallic complex catalysts Pt_lAu_m(CO)_n (l + m = 2, n = 1–3) in the reduction of SO2 by CO. Due to the strong coupling between the C-2p and metal 5d orbitals, pre-adsorption of CO molecules on Pt_lAu_m is found to be very effective not only in reducing the activation energy, but also in preventing poisoning by sulfur. As a result of the coupling, the metal 5d band is broadened and down-shifted, and charge is transferred from the CO molecules to Pt_lAu_m. As SO2 is adsorbed on the catalyst, partial charge moves to the antibonding σ orbitals between S and O in SO2, weakening the S–O bond strength. This effect is enhanced by pre-adsorbing up to three CO molecules, so the S–O bonds become vulnerable. Our results reveal the mechanism behind the excellent catalytic properties of these bimetallic complex catalysts.
Abstract:
The catalytic activities of the clusters Pt_lAu_m (l + m = 2), with or without preadsorbed CO molecules, toward the reduction of SO2 by CO are investigated using first-principles density functional theory. We find that the PtAu(CO)_n (n = 1–3) clusters show better catalytic properties than either pure-metal catalyst. Preadsorption of CO on the catalysts can effectively prevent sulfur poisoning of the platinum-based catalyst; as more CO molecules are preadsorbed, the energy barriers for desorption of the carbonyl sulfide (COS) molecule from the catalyst decrease markedly. We propose an ideal catalytic cycle that simultaneously removes SO2 and CO over the catalyst PtAu(CO)3.
Abstract:
We present new evidence for sector collapses of the South Soufrière Hills (SSH) edifice, Montserrat, during the mid-Pleistocene. High-resolution geophysical data provide evidence for sector collapse, producing an approximately 1 km3 submarine collapse deposit to the south of SSH. Sedimentological and geochemical analyses of submarine deposits sampled by sediment cores suggest that they were formed by large multi-stage flank failures of the subaerial SSH edifice into the sea. This work identifies two distinct geochemical suites within the SSH succession on the basis of trace-element and Pb-isotope compositions. Volcaniclastic turbidites in the cores preserve these chemically heterogeneous rock suites; however, the subaerial chemostratigraphy is reversed within the submarine sediment cores. Sedimentological analysis suggests that the edifice failures produced high-concentration turbidites and that the collapses occurred in multiple stages, with an interval of at least 2 ka between the first and second failure. Detailed field and petrographical observations, coupled with SEM image analysis, show that the SSH volcanic products preserve a complex record of magmatic activity. This activity consisted of episodic explosive eruptions of andesitic pumice, probably triggered by mafic magmatic pulses and followed by eruptions of poorly vesiculated basaltic scoria and basaltic lava flows.
Abstract:
Organisations are constantly seeking new ways to improve operational efficiencies. This research study investigates a novel way to identify potential efficiency gains in business operations by observing how they were carried out in the past and then exploring better ways of executing them, taking into account trade-offs between time, cost and resource utilisation. This paper demonstrates how these trade-offs can be incorporated into the assessment of alternative process execution scenarios by making use of a cost environment. A genetic algorithm-based approach is proposed to explore and assess alternative process execution scenarios, where the objective function is represented by a comprehensive cost structure that captures different process dimensions. Experiments conducted with different variants of the genetic algorithm evaluate the approach's feasibility. The findings demonstrate that a genetic algorithm-based approach is able to use cost reduction as a way to identify improved execution scenarios in terms of reduced case durations and increased resource utilisation. The ultimate aim is to use cost-related insights gained from such improved scenarios to put forward recommendations for reducing process-related cost within organisations.
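As an illustration of this kind of approach (not the authors' implementation), a genetic algorithm can encode each execution scenario as an assignment of resources to activities and score it with a comprehensive cost function trading off case duration against resource cost; every name, rate and weight below is hypothetical.

```python
import random

# Hypothetical scenario encoding: one resource index per activity.
ACTIVITIES = 8
RESOURCES = [{"rate": 30.0, "speed": 1.0},   # cheap, slow
             {"rate": 55.0, "speed": 1.6},   # expensive, fast
             {"rate": 40.0, "speed": 1.2}]
BASE_DURATION = [5, 3, 8, 2, 6, 4, 7, 3]     # hours per activity
W_TIME, W_COST = 1.0, 0.5                    # assumed trade-off weights

def total_cost(scenario):
    """Comprehensive cost: weighted sum of case duration and resource cost."""
    hours = [d / RESOURCES[r]["speed"] for d, r in zip(BASE_DURATION, scenario)]
    money = [h * RESOURCES[r]["rate"] for h, r in zip(hours, scenario)]
    return W_TIME * sum(hours) + W_COST * sum(money)

def evolve(pop_size=40, generations=200, mut_rate=0.1):
    """Evolve resource assignments toward lower total cost."""
    pop = [[random.randrange(len(RESOURCES)) for _ in range(ACTIVITIES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)                      # lower cost is fitter
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, ACTIVITIES)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:            # point mutation
                child[random.randrange(ACTIVITIES)] = random.randrange(len(RESOURCES))
            children.append(child)
        pop = parents + children
    return min(pop, key=total_cost)

best = evolve()
print(best, round(total_cost(best), 2))
```

Shifting the weights W_TIME and W_COST reproduces the time/cost/utilisation trade-off the paper discusses: duration-heavy weights favour fast, expensive resources, cost-heavy weights favour cheap, slow ones.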
Abstract:
This study reports the synthesis, characterization and application of nano zero-valent iron (nZVI). The nZVI was produced by a reduction method and compared with commercially available ZVI powder for Pb2+ removal from the aqueous phase. Compared with the commercial ZVI, the laboratory-made nZVI powder has a much higher specific surface area. XRD patterns revealed zero-valent iron phases in both ZVI materials. Different morphologies were observed using SEM and TEM techniques. EDX spectra revealed an even distribution of Pb on the surface after reaction. XPS analysis confirmed that the immobilized lead was present in its zero-valent and bivalent forms. A 'core-shell' structure of the prepared ZVI was revealed by combining the XRD and XPS characterizations. In addition, compared with the Fluka ZVI, this laboratory-made nZVI has much higher reactivity towards Pb2+: 99.9% removal was reached within just 15 min. This synthesized nano-ZVI material has shown great potential for heavy metal immobilization from wastewater.
Abstract:
The primary motivation for the vehicle replacement schemes that were implemented in many countries was to encourage the purchase of new cars. The basic assumption of these schemes was that these acquisitions would benefit both the economy and the environment as older and less fuel-efficient cars were scrapped and replaced with more fuel-efficient models. In this article, we present a new environmental impact assessment method for assessing the effectiveness of scrappage schemes for reducing CO2 emissions, taking into account the rebound effect, driving behavior for older versus new cars, and the full lifecycle emissions arising during the manufacture of new cars. The assessment of the Japanese scrappage scheme shows that CO2 emissions would only decrease if users of the scheme retained their new gasoline passenger vehicles for at least 4.7 years. When vehicle replacements were restricted to hybrid cars, the reduction in CO2 achieved by the scheme would be 6-8.5 times higher than that resulting from a scheme involving standard gasoline passenger vehicles. Cost-benefit analysis, based on the emission reduction potential, showed that the scheme was very costly. Sensitivity analysis showed that the Japanese government failed to determine the optimum, or target, car age for scrapping old cars in the scheme. Specifically, scrapping cars aged 13 years and over did not maximize the environmental benefits of the scheme. Consequently, modifying this policy to include a reduction in new car subsidies, focused funding for fuel-efficient cars, and a modified target car age would increase the environmental benefits.
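The break-even logic behind the 4.7-year figure is simple arithmetic: the one-off manufacturing emissions of the new car must be recouped by the annual emission savings, net of the rebound effect. The sketch below reproduces that calculation with purely hypothetical numbers, not the paper's data.

```python
# Hypothetical inputs, for illustration only.
manufacturing_co2_t = 2.5        # t CO2 emitted producing the new car
annual_km = 10000.0              # km driven per year with the old car
old_gkm, new_gkm = 200.0, 140.0  # emission factors, g CO2/km
rebound = 1.05                   # new-car driving rises 5% (rebound effect)

# Annual net saving in tonnes: old-car emissions minus new-car emissions,
# with the rebound effect inflating new-car mileage.
annual_saving_t = (annual_km * old_gkm - annual_km * rebound * new_gkm) / 1e6
breakeven_years = manufacturing_co2_t / annual_saving_t
print(round(breakeven_years, 1))  # years the new car must be kept before
                                  # the scheme yields a net CO2 reduction
```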
Abstract:
Japan's fishery harvest peaked in the late 1980s. To limit the race for fish, each fisherman could be provided with specific catch limits in the form of individual transferable quotas (ITQs). A market for ITQs would also help remove the most inefficient fishers. In this article we estimate the potential cost reduction associated with catch limits, and find that about 300 billion yen (about 3 billion dollars) could be saved through the allocation and trading of individual-specific catch shares.