971 results for Harder


Relevance:

10.00%

Publisher:

Abstract:

With the continued development of oil and gas exploration, new reserves are becoming harder to find, placing higher demands on seismic exploration. 3C3D (three-component, three-dimensional) seismic technology has already been applied in petroleum exploration abroad. Compared with traditional P-wave exploration, 3C3D surveys provide far richer seismic attribute information and allow various combined parameters to be derived. Lithology, porosity, fracturing, oil-bearing properties, and other reservoir characteristics can be estimated from these parameters with higher precision than from P-wave data alone. These advantages have driven the rapid recent development of 3C3D technology. How to apply it to petroleum exploration in China, how to acquire high-quality seismic data, and how to process and interpret real data have therefore become frontier topics in geophysics, with important practical significance for research and application. In this paper, a design method for 3C3D acquisition parameters was developed from the propagation properties of P-waves and converted waves. The main parameters included trace interval, shot interval, maximum offset, bin size, receiver line interval, shot line interval, migration aperture, and maximum crossline distance, and principles for determining each were given. Geometry types for 3C3D surveys were studied; by calculating bin attributes and analyzing geometry parameters, useful conclusions were drawn, and the method was used to design real geometries for a continental lithologic stratigraphic gas reservoir and a fractured gas reservoir. For multi-wave statics, a near-surface P-wave and S-wave parameter investigation method was developed (and successfully filed as a patent), and it was combined with converted refracted-wave first-arrival statics to improve the effectiveness of converted-wave static corrections. For converted-wave processing, horizontal-component rotation, common-conversion-bin calculation, converted-wave residual statics, common-conversion-point (CCP) velocity analysis, and converted-wave Kirchhoff prestack time migration were assembled into 3C3D processing flows tailored to different geologic targets, yielding high-quality P-wave and converted-wave profiles from real data. For joint P-wave and converted-wave interpretation, methods were proposed for calibrating horizons using zero-offset S-wave VSP data, for predicting areas of oil and gas enrichment from P- to S-wave amplitude ratios, and for joint inversion of P-waves and S-waves. Parameters such as the velocity ratio, amplitude ratio, and Poisson's ratio were used to predict reservoir depth, physical properties, and gas-bearing properties, and a method for predicting continental stratigraphic lithologic gas reservoirs was established. These techniques have all been applied in various 3C3D seismic exploration projects in China with good results, raising the overall level of 3C3D seismic exploration.
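To make the design quantities above concrete, here is a minimal Python sketch of the kind of textbook rules of thumb a 3C3D geometry design starts from: the natural bin size set by the station intervals, nominal fold from shot density and channel count, and the asymptotic conversion-point shift that converted-wave (CCP) binning must account for. All numbers and function names are illustrative, not taken from the thesis.

```python
# Illustrative sketch only: standard orthogonal-geometry rules of thumb;
# the parameter values below are hypothetical, not from the thesis.

def natural_bin_size(receiver_interval_m: float, source_interval_m: float):
    """Natural CMP bin size: half the station interval in each direction."""
    return receiver_interval_m / 2.0, source_interval_m / 2.0

def nominal_fold(shot_density_per_km2, channels, bin_x_m, bin_y_m):
    """Midpoints per bin = shot density x live channels x bin area."""
    bin_area_km2 = (bin_x_m / 1000.0) * (bin_y_m / 1000.0)
    return shot_density_per_km2 * channels * bin_area_km2

def asymptotic_conversion_point(offset_m, vp_vs_ratio):
    """Distance of the P-S conversion point from the source at large depth.

    For converted (P-SV) waves the reflection point shifts toward the
    receiver; asymptotically it divides the offset as Vp/Vs : 1, which is
    what common-conversion-point binning has to account for.
    """
    g = vp_vs_ratio
    return offset_m * g / (1.0 + g)

bx, by = natural_bin_size(50.0, 50.0)           # 25 m x 25 m bins
fold = nominal_fold(40.0, 2000, bx, by)         # hypothetical design: fold 50
acp = asymptotic_conversion_point(3000.0, 2.0)  # 2000 m from the source
print(bx, by, fold, acp)
```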

Relevance:

10.00%

Publisher:

Abstract:

PILOT is a programming system constructed in LISP. It is designed to facilitate the development of programs by easing the familiar sequence: write some code, run the program, make some changes, write some more code, run the program again, etc. As a program becomes more complex, making these changes becomes harder and harder because the implications of changes are harder to anticipate. In the PILOT system, the computer plays an active role in this evolutionary process by providing the means whereby changes can be effected immediately, and in ways that seem natural to the user. The user of PILOT feels that he is giving advice, or making suggestions, to the computer about the operation of his programs, and that the system then performs the work necessary. The PILOT system is thus an interface between the user and his program, monitoring both the requests of the user and the operation of his program. The user may easily modify the PILOT system itself by giving it advice about its own operation. This allows him to develop his own language and to shift gradually onto PILOT the burden of performing routine but increasingly complicated tasks. In this way, he can concentrate on the conceptual difficulties of the original problem rather than on the niggling tasks of editing, rewriting, or adding to his programs. Two detailed examples are presented. PILOT is a first step toward computer systems that will help man to formulate problems in the same way they now help him to solve them. Experience with it supports the claim that such "symbiotic systems" allow the programmer to attack and solve more difficult problems.
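As a rough illustration of the "advice" idea (in Python rather than LISP, and with an invented API; PILOT itself operated on LISP programs), the sketch below modifies a procedure's behavior by wrapping it rather than editing its source:

```python
# Toy illustration of advising: change a function's behavior by attaching
# advice around it instead of rewriting its code. Names are invented here.

def advise(fn, before=None, after=None):
    """Return fn wrapped so advice runs before/after, without editing fn."""
    def advised(*args, **kwargs):
        if before:
            args, kwargs = before(args, kwargs)   # advice may rewrite inputs
        result = fn(*args, **kwargs)
        if after:
            result = after(result)                # advice may rewrite output
        return result
    return advised

def sqrt_newton(x, iters=20):
    guess = x
    for _ in range(iters):
        guess = 0.5 * (guess + x / guess)
    return guess

# "Advice": guard against negative input instead of rewriting sqrt_newton.
safe_sqrt = advise(
    sqrt_newton,
    before=lambda a, kw: ((abs(a[0]),) + a[1:], kw),
)
print(safe_sqrt(-9.0))  # 3.0
```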

Relevance:

10.00%

Publisher:

Abstract:

The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. The two tasks have a complementary relationship: the temporal constraints provide valuable neighborhood information for dimensionality reduction and, conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving the two tasks simultaneously allows them to share this information. If nonlinear models are required to capture the rich complexity of time series, the learning problem becomes harder still, because the nonlinearities in the two tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models, with the interactions among the linear models captured in a graphical model. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. The proposed framework is evaluated against competing approaches in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.

Relevance:

10.00%

Publisher:

Abstract:

The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. The two tasks have a complementary relationship: the temporal constraints provide valuable neighborhood information for dimensionality reduction and, conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving the two tasks simultaneously allows them to share this information. If nonlinear models are required to capture the rich complexity of time series, the learning problem becomes harder still, because the nonlinearities in the two tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models, with the interactions among the linear models captured in a graphical model. Model structure setup and parameter learning are carried out using a variational Bayesian approach, which enables automatic Bayesian model structure selection and hence addresses the problem of over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. The proposed framework is evaluated against competing approaches in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
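A minimal generative sketch of this modeling idea, with invented placeholder parameters rather than learned ones: a discrete mode picks one of several linear dynamics on a low-dimensional latent state, and a per-mode linear map lifts the state to high-dimensional observations.

```python
import numpy as np

# Sketch of the piecewise-linear idea: a discrete mode selects one of
# several *linear* dynamics on a 2-D latent state; a per-mode linear map
# lifts it to 50-D observations. All matrices are placeholders, not learned.

rng = np.random.default_rng(0)
n_modes, latent_dim, obs_dim, T = 3, 2, 50, 200

thetas = [0.1, 0.3, -0.2]                        # one rotation rate per mode
A = [0.99 * np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]]) for th in thetas]
C = [rng.standard_normal((obs_dim, latent_dim)) for _ in range(n_modes)]
P = np.full((n_modes, n_modes), 0.05 / (n_modes - 1))
np.fill_diagonal(P, 0.95)                        # sticky mode transitions

x = rng.standard_normal(latent_dim)
s = 0
Y = np.empty((T, obs_dim))
for t in range(T):
    s = rng.choice(n_modes, p=P[s])              # switch linear regime
    x = A[s] @ x + 0.01 * rng.standard_normal(latent_dim)
    Y[t] = C[s] @ x + 0.1 * rng.standard_normal(obs_dim)

print(Y.shape)  # (200, 50): high-dimensional series from 2-D latent dynamics
```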

Relevance:

10.00%

Publisher:

Abstract:

Nearest neighbor retrieval is the task of identifying, given a database of objects and a query object, the objects in the database that are the most similar to the query. Retrieving nearest neighbors is a necessary component of many practical applications, in fields as diverse as computer vision, pattern recognition, multimedia databases, bioinformatics, and computer networks. At the same time, finding nearest neighbors accurately and efficiently can be challenging, especially when the database contains a large number of objects, and when the underlying distance measure is computationally expensive. This thesis proposes new methods for improving the efficiency and accuracy of nearest neighbor retrieval and classification in spaces with computationally expensive distance measures. The proposed methods are domain-independent, and can be applied in arbitrary spaces, including non-Euclidean and non-metric spaces. In this thesis particular emphasis is given to computer vision applications related to object and shape recognition, where expensive non-Euclidean distance measures are often needed to achieve high accuracy.

The first contribution of this thesis is the BoostMap algorithm for embedding arbitrary spaces into a vector space with a computationally efficient distance measure. Using this approach, an approximate set of nearest neighbors can be retrieved efficiently, often orders of magnitude faster than retrieval using the exact distance measure in the original space. The BoostMap algorithm has two key distinguishing features with respect to existing embedding methods. First, embedding construction explicitly maximizes the amount of nearest neighbor information preserved by the embedding. Second, embedding construction is treated as a machine learning problem, in contrast to existing methods that are based on geometric considerations.

The second contribution is a method for constructing query-sensitive distance measures for the purposes of nearest neighbor retrieval and classification. In high-dimensional spaces, query-sensitive distance measures allow for automatic selection of the dimensions that are the most informative for each specific query object. It is shown theoretically and experimentally that query-sensitivity increases the modeling power of embeddings, allowing embeddings to capture a larger amount of the nearest neighbor structure of the original space.

The third contribution is a method for speeding up nearest neighbor classification by combining multiple embedding-based nearest neighbor classifiers in a cascade. In a cascade, computationally efficient classifiers are used to quickly classify easy cases, and classifiers that are more computationally expensive and also more accurate are only applied to objects that are harder to classify. An interesting property of the proposed cascade method is that, under certain conditions, classification time actually decreases as the size of the database increases, a behavior that is in stark contrast to the behavior of typical nearest neighbor classification systems.

The proposed methods are evaluated experimentally in several different applications: hand shape recognition, off-line character recognition, online character recognition, and efficient retrieval of time series. In all datasets, the proposed methods lead to significant improvements in accuracy and efficiency compared to existing state-of-the-art methods. In some datasets, the general-purpose methods introduced in this thesis even outperform domain-specific methods that have been custom-designed for such datasets.
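The retrieval pattern the thesis builds on, filter with a cheap embedded distance and refine with the exact expensive one, can be sketched as below. For brevity this sketch embeds via distances to a few reference objects (a classic Lipschitz-style embedding); BoostMap itself learns the embedding with boosting to preserve nearest-neighbor rank information, which this sketch does not do.

```python
import numpy as np

# Filter-and-refine nearest neighbor retrieval with an embedding.
# expensive_distance is a stand-in for a costly non-Euclidean measure
# (e.g. DTW or a shape-matching cost); here it is just L1 for demonstration.

def expensive_distance(a, b):
    return float(np.abs(a - b).sum())

def embed(x, references):
    # Lipschitz-style embedding: coordinates are distances to references.
    return np.array([expensive_distance(x, r) for r in references])

def retrieve(query, database, references, k=1, filter_size=10):
    fq = embed(query, references)
    fdb = np.array([embed(x, references) for x in database])  # precomputable
    # Filter step: cheap L1 distance in the embedded vector space.
    candidates = np.abs(fdb - fq).sum(axis=1).argsort()[:filter_size]
    # Refine step: exact expensive distance on the short candidate list.
    exact = sorted(candidates,
                   key=lambda i: expensive_distance(query, database[i]))
    return exact[:k]

rng = np.random.default_rng(1)
db = [rng.standard_normal(64) for _ in range(1000)]
refs = db[:8]  # hypothetical choice of reference objects
print(retrieve(db[42] + 0.01, db, refs, k=1))  # likely [42]
```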

Relevance:

10.00%

Publisher:

Abstract:

Transition Year (TY) has been a feature of the Irish education landscape for 39 years, and work experience (WE) has become a key component of it. WE is defined as a module of between five and fifteen days' duration in which students engage in a work placement in the broader community. It places a major emphasis on building relationships between schools and their external communities and, concomitantly, between students and their potential future employers. Yet the idea that participation in a TY work experience programme could facilitate an increased awareness of potential careers has drawn little attention from the research community. This research examines the influence WE has on students' subsequent subject choices, along with the effects of the experience on students' identities and emerging vocational identities. Socio-cultural Learning Theory and Occupational Choice Theory frame the overall study. A mixed-methods approach to data collection was adopted: 323 quantitative questionnaires were administered and 32 individual semi-structured interviews conducted in three secondary schools, and the data were analysed using a grounded theory approach. The findings show that WE makes a significant contribution to students' sense of agency in their own lives. It facilitates the otherwise complex process of subject choice, motivates students to work harder in their senior cycle, and introduces them to the concepts of active, experience-based and self-directed learning, while boosting their self-confidence and nurturing the emergence of their personal and vocational identities. This research is a gateway to further study in this field. It also has wide-reaching implications for students, teachers, school authorities, parents and policy makers regarding teaching and learning in our schools and the value of learning beyond the walls of the classroom.

Relevance:

10.00%

Publisher:

Abstract:

The effects of fortifying Cheddar cheese with skim milk powder (SMP) and sodium caseinate (NaCn) were investigated. SMP fortification led to decreased moisture, increased yield, higher numbers of non-starter lactic acid bacteria (NSLAB) and reduced proteolysis. The functional and textural properties were also affected by SMP addition, giving a harder, less meltable cheese than the control. NaCn fortification led to increased moisture, increased yield, decreased proteolysis and higher numbers of NSLAB; it gave a softer cheese with similar or less melt than the control. Reducing the lactose:casein ratio of Mozzarella cheese by ultrafiltration led to higher pH and to lower insoluble calcium, lactose, galactose and lactic acid levels in the cheese. The textural and functional properties were affected by varying the lactose:casein ratio, giving a harder cheese with melt similar to the control later in ripening. The flavour and bake properties were also affected by the decreased lactose:casein ratio; the cheeses had less acid flavour and less blister colour than the control cheese. Varying the ratio of αs1:β-casein in Cheddar cheese affected the texture and functionality of the cheese but did not affect insoluble calcium, proteolysis or pH; increasing the ratio of αs1:β-casein led to cheese with lower meltability and higher hardness, without adverse effects on flavour. Using camel chymosin in Mozzarella cheese instead of calf chymosin resulted in cheese with less proteolysis, a higher softening point, higher hardness and lower blister quantity. The texture and functional properties that determine the shelf life of Mozzarella were maintained for a longer ripening period than with calf chymosin, thereby extending the window of functionality of Mozzarella. In summary, the results of the trials in this thesis demonstrate means of altering the textural, functional, rheological and sensory properties of Mozzarella and Cheddar cheeses.

Relevance:

10.00%

Publisher:

Abstract:

Agency problems within the firm are a significant hindrance to efficiency. We propose trust between coworkers as a superior alternative to the standard tools used to mitigate agency problems: increased monitoring and incentive-based pay. We model trust as mutual, reciprocal altruism between pairs of coworkers and show how it induces employees to work harder, relative to those at firms that use the standard tools. In addition, we show that employees at trusting firms have higher job satisfaction, and that these firms enjoy lower labor cost and higher profits. We conclude by discussing how trust may also be easier to use within the firm than the standard agency-mitigation tools. © 2002 Elsevier Science B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Consent forms have lengthened over time and become harder for participants to understand. We sought to demonstrate the feasibility of creating a simplified consent form for biobanking that comprises the minimum information necessary to meet ethical and regulatory requirements. We then gathered preliminary data concerning its content from hypothetical biobank participants. METHODOLOGY/PRINCIPAL FINDINGS: We followed basic principles of plain-language writing and incorporated into a 2-page form (not including the signature page) those elements of information required by federal regulations and recommended by best-practice guidelines for biobanking. We then recruited diabetes patients from community-based practices and randomized half (n = 56) to read the 2-page form, first on paper and then a second time on a tablet computer. Participants were encouraged to use "More information" buttons on the electronic version whenever they had questions or desired further information. These buttons led to a series of "Frequently Asked Questions" (FAQs) containing additional detailed information. Participants were asked to identify specific sentences in the FAQs they thought would be important if they were considering taking part in a biorepository. On average, participants identified about 7 FAQ sentences as important (mean 6.6, SD 14.7, range 0-71). No single sentence was highlighted by a majority of participants; further, 34 participants (60.7%) did not highlight any FAQ sentences. CONCLUSIONS: Our preliminary findings suggest that our 2-page form contains the information that most prospective participants identify as important. Combining simplified forms with supplemental material for those participants who desire more information could help minimize consent form length and complexity, allowing the most substantively material information to be better highlighted and enabling potential participants to read the form and ask questions more effectively.

Relevance:

10.00%

Publisher:

Abstract:

While numerous studies find that deep-saline sandstone aquifers in the United States could store many decades' worth of the nation's current annual CO2 emissions, the likely cost of this storage (i.e. the cost of storage only, excluding capture and transport costs) has been harder to constrain. We use publicly available data on key reservoir properties to produce geo-referenced rasters of estimated storage capacity and cost for regions within 15 deep-saline sandstone aquifers in the United States. The rasters reveal the reservoir quality of these aquifers to be so variable that the cost estimates for storage span three orders of magnitude and average more than $100/tonne CO2. However, when the cost and corresponding capacity estimates in the rasters are assembled into a marginal abatement cost curve (MACC), we find that ~75% of the estimated storage capacity could be available for less than $2/tonne. Furthermore, ~80% of the total estimated storage capacity in the rasters is concentrated within just two of the aquifers, the Frio Formation along the Texas Gulf Coast and the Mt. Simon Formation in the Michigan Basin, which together make up only ~20% of the areas analyzed. While our assessment is not comprehensive, the results suggest there should be an abundance of low-cost storage for CO2 in deep-saline aquifers, but that a majority of this storage is likely to be concentrated within specific regions of a smaller number of these aquifers. © 2011 Elsevier B.V.
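Assembling a marginal abatement cost curve from per-region cost and capacity estimates is a simple sort-and-accumulate computation. A sketch with made-up numbers (not the paper's raster data):

```python
import numpy as np

# Build a MACC from per-region (cost, capacity) estimates: sort regions by
# unit cost, accumulate capacity, then read off capacity under a threshold.
# The values below are invented for illustration.

cost_per_tonne = np.array([0.5, 1.2, 3.0, 40.0, 250.0])  # $/tonne CO2
capacity_mt    = np.array([900., 600., 150., 80., 20.])  # Mt CO2

order = np.argsort(cost_per_tonne)        # cheapest storage first
sorted_cost = cost_per_tonne[order]
cum_capacity = np.cumsum(capacity_mt[order])

# Share of total capacity available below a $2/tonne threshold:
total = cum_capacity[-1]
mask = sorted_cost < 2.0
below = cum_capacity[mask][-1] if mask.any() else 0.0
print(f"{100 * below / total:.0f}% of capacity under $2/tonne")
```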

Relevance:

10.00%

Publisher:

Abstract:

To maintain a strict balance between demand and supply in US power systems, the Independent System Operators (ISOs) schedule power plants and determine electricity prices using a market clearing model. For each time period and power plant, this model determines startup and shutdown times, the amount of power produced, and the provision of spinning and non-spinning generation reserves. This deterministic optimization model takes as input the characteristics of all generating units, such as installed generation capacity, ramp rates, minimum up- and down-time requirements, and marginal production costs, as well as forecasts of intermittent energy such as wind and solar, along with the minimum reserve requirement of the whole system. The reserve requirement is determined from the likelihood of outages on the supply side and from the levels of forecast error in demand and intermittent generation. With increased installed capacity of intermittent renewable energy, determining the appropriate level of reserve requirement has become harder. Stochastic market clearing models have been proposed as an alternative to deterministic ones: rather than taking a fixed reserve target as an input, they consider different scenarios of wind power and produce a reserve schedule as an output. Using a scaled version of the power generation system of PJM, a regional transmission organization (RTO) that coordinates the movement of wholesale electricity in all or parts of 13 states and the District of Columbia, together with wind scenarios generated from Bonneville Power Administration (BPA) data, this paper compares the performance of stochastic and deterministic models in market clearing. The two models are compared on their contributions to the affordability, reliability and sustainability of the electricity system, measured in terms of total operational costs, load shedding and air emissions. Building and testing the models indicated that a fair comparison is difficult to obtain, because of the multi-dimensional performance metrics considered here and the difficulty of setting the models' parameters in a way that does not advantage or disadvantage one modeling framework. Along these lines, this study explores the effect that model assumptions such as reserve requirements, the value of lost load (VOLL) and wind spillage costs have on the comparison of the performance of stochastic versus deterministic market clearing models.
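At its core, the deterministic clearing problem with a fixed reserve target is a constrained cost-minimization. A toy single-period version (invented unit data; real market models add unit commitment, multi-period ramping, and network constraints) can be written as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Toy single-period dispatch with a fixed spinning-reserve target: choose
# generation g_i and reserve r_i to minimize cost subject to energy balance,
# unit headroom (g_i + r_i <= cap_i) and reserve adequacy (sum r_i >= R).
# Unit data are invented for illustration.

cost = np.array([20.0, 35.0, 90.0])    # $/MWh marginal cost per unit
cap  = np.array([400.0, 300.0, 200.0]) # MW capacity per unit
rmax = np.array([50.0, 80.0, 120.0])   # MW reserve each unit can hold
demand, reserve_req = 600.0, 150.0

n = len(cost)
c = np.concatenate([cost, np.zeros(n)])  # reserves carry no cost here

# Energy balance: sum(g) == demand
A_eq = np.concatenate([np.ones(n), np.zeros(n)])[None, :]
b_eq = [demand]

# Headroom rows g_i + r_i <= cap_i, plus adequacy row -sum(r) <= -R
A_ub = np.vstack([np.hstack([np.eye(n), np.eye(n)]),
                  np.concatenate([np.zeros(n), -np.ones(n)])[None, :]])
b_ub = np.concatenate([cap, [-reserve_req]])

bounds = [(0, cap[i]) for i in range(n)] + [(0, rmax[i]) for i in range(n)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:n], res.x[n:])  # dispatch and reserve schedule
```

A stochastic variant would replace the fixed `reserve_req` with wind scenarios and scenario-dependent recourse variables, which is exactly the modeling difference the paper is probing.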

Relevance:

10.00%

Publisher:

Abstract:

The Institute of Community Studies was set up by Michael Young in order to carry out research on politically relevant social issues, in a context free from direct political control. A research method was devised for it whereby researchers made their own values and objectives very explicit, while staying as close as possible in their reports to the concerns and language of respondents themselves. This method has often been criticized by professional sociologists, but it reflects quite well the nature of social knowledge. It has produced reports which help to increase public understanding of social processes and provide useful guidance to policy makers. Professional sociology, on the other hand, has tried to develop a rigorously value-free method. As a result, though, it often seems to be tied implicitly to values shared among researchers but not more universally. Arguably this makes it harder for the general public to understand, and accept, its findings.

Relevance:

10.00%

Publisher:

Abstract:

This project was commissioned to generate an improved understanding of the sensitivity of seagrass habitats to pressures associated with human activities in the marine environment, providing an evidence base to facilitate and support management advice for Marine Protected Areas, the development of UK marine monitoring and assessment, and conservation advice to offshore marine industries. Seagrass bed habitats are identified as a Priority Marine Feature (PMF) under the Marine (Scotland) Act 2010; they are also included on the OSPAR list of threatened and declining species and habitats, and are a Habitat of Principal Importance (HPI) under the Natural Environment and Rural Communities (NERC) Act 2006 in England and Wales. The purpose of this project was to produce sensitivity assessments with supporting evidence for the HPI, OSPAR and PMF seagrass/Zostera bed habitat definitions, clearly documenting the evidence behind the assessments and any differences between them.

Nineteen pressures, falling into five categories (biological; hydrological; physical damage; physical loss; and pollution and other chemical changes) were assessed in this report. Assessments were based on the three British seagrasses Zostera marina, Z. noltei and Ruppia maritima. Z. marina var. angustifolia was considered a subspecies of Z. marina, but it was specified where studies had treated it as a species in its own right. Where possible, other components of the community were investigated, but the assessments focused on the seagrass species themselves. To develop each sensitivity assessment, the resistance and resilience of the key elements were assessed against the pressure benchmark using the available evidence. The benchmarks were designed to provide a 'standard' level of pressure against which to assess sensitivity.

Overall, seagrass beds were highly sensitive to a number of human activities:
• penetration or disturbance of the substratum below the surface;
• habitat structure changes (removal of substratum);
• physical change to another sediment type;
• physical loss of habitat;
• siltation rate changes, including smothering; and
• changes in suspended solids.

High sensitivity was recorded for pressures that directly impact the factors limiting seagrass growth and health, such as light availability. Physical pressures that cause mechanical modification of the sediment, and hence damage to roots and leaves, also resulted in high sensitivity. Seagrass beds were assessed as 'not sensitive' to microbial pathogens or to 'removal of target species'; these assessments were based on the benchmarks used. Z. marina is known to be sensitive to Labyrinthula zosterae, but this was not included in the benchmark used. Similarly, 'removal of target species' addresses only the biological effects of removal and not the physical effects of the process used. For example, seagrass beds are probably not sensitive to the removal of scallops found within a bed, but are highly sensitive to the effects of dredging for scallops, as assessed under the pressure 'penetration or disturbance of the substratum below the surface'. This is also an example of a synergistic effect between pressures. Where possible, synergistic effects were highlighted, but synergistic and cumulative effects are outside the scope of this study. The report found that no distinct differences in sensitivity exist between the HPI, PMF and OSPAR definitions.

Individual biotopes do, however, have different sensitivities to pressures. These differences were determined by the species affected, the position of the habitat on the shore, and the sediment type. For instance, evidence showed that beds growing in soft and muddy sand were more vulnerable to physical damage than beds on harder, more compact substratum. Temporal effects can also influence the sensitivity of seagrass beds. On a seasonal time frame, physical damage to roots and leaves occurring in the reproductive season (the summer months) has a greater impact than damage in winter. On a daily basis, the tidal regime can accentuate or attenuate the effects of pressures depending on high and low tide. A variety of factors must therefore be taken into account to assess the sensitivity of a particular seagrass habitat at any location.

No clear difference in resilience was established across the three seagrass definitions assessed in this report. The resilience of seagrass beds, and their ability to recover from human-induced pressures, is a combination of the environmental conditions of the site, the growth rates of the seagrass, and the frequency and intensity of the disturbance. This highlights the importance of considering the species affected, the ecology of the seagrass bed, the environmental conditions, and the types and nature of the activities giving rise to the pressure and the effects of that pressure. For example, pressures that result in sediment modification (e.g. pitting or erosion), sediment change or sediment removal prolong recovery. The resilience of each biotope and habitat definition is therefore discussed for each pressure.

Using a clearly documented, evidence-based approach to create sensitivity assessments allows the assessments, and any subsequent decision-making or management plans, to be readily communicated, transparent and justifiable. The assessments can be replicated and updated as new evidence becomes available, ensuring the longevity of the sensitivity assessment tool. The evidence review has reduced the uncertainty around assessments previously undertaken in the MB0102 project (Tillin et al. 2010) by assigning a single sensitivity score to each pressure rather than a range. Finally, as seagrass habitats may also contribute to ecosystem function and the delivery of ecosystem services, understanding the sensitivity of these biotopes may also support assessment and management in that regard. Whatever objective measures are applied to data to assess sensitivity, the final sensitivity assessment is indicative. The evidence, the benchmarks, the confidence in the assessments and the limitations of the process require a sense-check by experienced marine ecologists before the outcome is used in management decisions.
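For illustration, the resistance-resilience combination step can be sketched as a lookup table. The matrix entries below are plausible placeholders only, not the scoring rules used in this report:

```python
# Illustrative only: a minimal resistance x resilience lookup of the kind
# such evidence-based assessments use. The entries are placeholders, NOT
# the actual scoring matrix of this report.

SENSITIVITY = {
    # (resistance, resilience) -> sensitivity
    ("None",   "Very low"): "High",
    ("None",   "Medium"):   "Medium",
    ("Low",    "Medium"):   "Medium",
    ("Medium", "High"):     "Low",
    ("High",   "High"):     "Not sensitive",
}

def assess(resistance: str, resilience: str) -> str:
    return SENSITIVITY.get((resistance, resilience), "No assessment")

# e.g. seagrass vs. penetration of the substratum: roots are destroyed and
# recovery is slow, so no resistance + very low resilience -> high sensitivity.
print(assess("None", "Very low"))  # High
```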

Relevance:

10.00%

Publisher:

Abstract:

Pre-fight displays typically provide honest, but sometimes dishonest, information about resource-holding potential, and may be influenced by assessment of resource value and hence motivation to acquire the resource. These assessments of potential costs and benefits are also predicted to influence escalated fight behaviour. We examined this in shell exchange contests of hermit crabs, in which we established an information asymmetry about a particularly poor-quality shell. The poor shell was created by gluing sand to its interior, whereas control shells lacked sand; the low value of the poor shell could not be accurately assessed by the opponent. Crabs in the poor shell changed their use of pre-fight displays, apparently to increase the chances of swapping shells. When fights escalated, crabs in poor shells fought harder if they took the role of attacker but gave up quickly in the defender role. These tactics appear to be adaptive but did not produce a major shift in the roles taken or in the outcome. We thus link resource assessment with pre-fight displays, the roles taken, the tactics used during escalation, and the outcome of these contests.