114 results for Customized instructions
Abstract:
Located in the Gulf of Mexico in nearly 8,000 ft of water, the Perdido project is the deepest spar application to date in the world and Shell's first fully integrated application of its in-house digital oilfield technology, called "Smart Field", in the Western hemisphere. Developed by Shell on behalf of partners BP and Chevron, the spar and the subsea equipment connected to it will eventually capture about an order of magnitude more data than is collected from any other Shell-designed and -managed development operating in the Gulf of Mexico. This article describes Shell's digital oilfield design philosophy, briefly explains the five design elements that underpin "smartness" in Shell's North and South American operations, and sheds light on the process by which a highly customized digital oilfield development and management plan was put together for Perdido. Although Perdido is the first instance in North and South America in which these design elements and processes were applied in an integrated way, all of Shell's future new developments in the Western hemisphere are expected to follow the same overarching design principles. Accordingly, this article uses Perdido as a real-world example to outline the high-level details of Shell's digital oilfield design philosophy and processes.
Abstract:
Located in the Gulf of Mexico in nearly 8,000 feet of water, the Perdido development is the world’s deepest spar and Shell’s first Smart Field in the Western hemisphere. Jointly developed by Shell, BP, and Chevron, the spar and the subsea equipment connected to it will eventually capture approximately an order of magnitude more data than is collected from any other Shell-designed and managed development currently operating in the Gulf of Mexico. This paper will describe Shell’s Smart Fields design philosophy, briefly explain the five design elements that underpin “smartness” in Shell’s North and South American operations—specifically, remote assisted operations, exception-based surveillance, collaborative work environments, hydrocarbon development tools and workflows, and Smart Fields Foundation IT infrastructure—and shed light on the process by which a highly customized Smart Fields development and management plan was put together for Perdido.
Abstract:
Transit passenger market segmentation enables transit operators to target different classes of transit users and provide customized information and services. Smart Card (SC) data from automated fare collection systems facilitate the understanding of the multiday travel regularity of transit passengers and can be used to segment them into identifiable classes with similar behaviors and needs. However, the use of SC data for market segmentation has attracted very limited attention in the literature. This paper proposes a novel methodology for mining spatial and temporal travel regularity from each individual passenger's historical SC transactions and segmenting passengers into four classes of transit users. After reconstructing travel itineraries from historical SC transactions, the paper adopts the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to mine the travel regularity of each SC user. The travel regularity is then used to segment SC users through an a priori market segmentation approach. The methodology proposed in this paper assists transit operators in understanding their passengers and providing them with targeted information and services.
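For readers unfamiliar with the clustering step, the sketch below illustrates how DBSCAN can expose temporal travel regularity in one passenger's smart-card record. The 30-minute epsilon, minimum-trip threshold, and data layout are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch: mining temporal travel regularity from one passenger's
# smart-card boarding times with DBSCAN. Epsilon and min_samples values
# are illustrative assumptions, not the paper's settings.
import numpy as np
from sklearn.cluster import DBSCAN

def temporal_regularity(boarding_times_sec, eps_sec=1800, min_trips=3):
    """Cluster boarding times (seconds after midnight) observed across many days.

    Returns one label per trip; trips labelled -1 are irregular (noise),
    while other labels identify a habitual departure-time window.
    """
    X = np.asarray(boarding_times_sec, dtype=float).reshape(-1, 1)
    return DBSCAN(eps=eps_sec, min_samples=min_trips).fit_predict(X)

# Example: a commuter boarding near 08:00 on four days plus two one-off trips.
times = [7.9 * 3600, 8.1 * 3600, 8.0 * 3600, 7.95 * 3600, 13.0 * 3600, 21.5 * 3600]
print(temporal_regularity(times))   # e.g. [0 0 0 0 -1 -1]
```

A spatial counterpart would cluster boarding-stop coordinates in the same way; the regular-versus-noise labels then feed the a priori segmentation into the four classes of transit users.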
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information, whether that is information for a specific study, tweets that can inform emergency services or other responders to an ongoing crisis, or information that gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are preformed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset could be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form, and emotions that has the potential to affect future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis.
A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and on how such approaches could be customized depending on the project stakeholders.
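As an illustration of the content-analysis and user-profiling ideas raised in the first paper, the following sketch scores incoming tweets for topical relevance and urgency and boosts accounts already known as authoritative sources. The keyword lists, weights, and threshold are invented for demonstration and are not drawn from the panel papers.

```python
# Illustrative triage sketch: score an incoming tweet for topical relevance
# and urgency, add a bonus for known authoritative accounts, and keep only
# tweets above a threshold for human responders. All term lists, weights,
# and the threshold are assumptions made for this example.
from dataclasses import dataclass

TOPIC_TERMS = {"flood", "evacuation", "shelter", "road closed"}
URGENCY_TERMS = {"help", "trapped", "urgent", "now"}
AUTHORITATIVE_USERS = {"city_emergency", "red_cross_local"}

@dataclass
class Tweet:
    user: str
    text: str

def score(tweet: Tweet) -> float:
    text = tweet.text.lower()
    relevance = sum(term in text for term in TOPIC_TERMS)
    urgency = sum(term in text for term in URGENCY_TERMS)
    authority = 2.0 if tweet.user in AUTHORITATIVE_USERS else 0.0
    return relevance + 1.5 * urgency + authority

def triage(tweets, threshold=3.0):
    """Return only the tweets that should be placed in front of responders."""
    return [t for t in tweets if score(t) >= threshold]

print(triage([Tweet("someone", "Trapped by flood water, need help now"),
              Tweet("other", "Nice weather today")]))
```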
Abstract:
Purpose This study aimed to objectively measure the physical activity (PA) characteristics of a racially and ethnically diverse sample of inner-city elementary schoolchildren and to examine the influence of sex, race/ethnicity, grade level, and weight status on PA. Methods A total of 470 students in grades 4-6 from six inner-city schools in Philadelphia wore an ActiGraph GT3X+ accelerometer (ActiGraph, Pensacola, FL) for up to 7 d. The resultant data were uploaded to a customized Visual Basic Excel macro to determine the time spent in sedentary behavior (SED), light-intensity PA (LPA), and moderate- to vigorous-intensity PA (MVPA). Results On average, students accumulated 48 min of MVPA daily. Expressed as a percentage of monitoring time, students were sedentary for 63% of the time, in LPA 31% of the time, and in MVPA 6% of the time. Across all race/ethnicity and grade level groups, boys exhibited significantly higher levels of MVPA than girls did; fifth-grade boys exhibited significantly lower MVPA levels than fourth- and sixth-grade boys did, and sixth-grade girls exhibited significantly lower MVPA levels than fourth- and fifth-grade girls did. Hispanic children exhibited lower levels of MVPA than children from other racial/ethnic groups did, and overweight and obese children exhibited significantly lower MVPA levels than children in the healthy weight range did. Across the entire sample, only 24.3% met the current public health guidelines for PA. Physical inactivity was significantly greater among females, Hispanics, and overweight and obese students. Conclusions Fewer than one in four inner-city schoolchildren accumulated the recommended 60 min of MVPA daily. These findings highlight the need for effective and sustainable programs to promote PA in inner-city youth.
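The study's data reduction was carried out with a customized Visual Basic Excel macro; the following Python sketch shows the general shape of such a count-classification step. The cutpoints below are placeholders for illustration only and are not the ones used in the study.

```python
# Sketch of an accelerometer data-reduction step: classify each minute of
# ActiGraph activity counts as sedentary (SED), light (LPA), or
# moderate-to-vigorous (MVPA). The cutpoints are assumed values for
# illustration, not the study's youth-specific cutpoints.
SED_CUT = 100      # counts/min at or below which a minute is sedentary (assumed)
MVPA_CUT = 2296    # counts/min at or above which a minute is MVPA (assumed)

def classify_minutes(counts_per_min):
    """Return total minutes spent in SED, LPA, and MVPA."""
    summary = {"SED": 0, "LPA": 0, "MVPA": 0}
    for c in counts_per_min:
        if c <= SED_CUT:
            summary["SED"] += 1
        elif c < MVPA_CUT:
            summary["LPA"] += 1
        else:
            summary["MVPA"] += 1
    return summary

print(classify_minutes([0, 50, 500, 1500, 3000, 4000]))
# {'SED': 2, 'LPA': 2, 'MVPA': 2}
```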
Abstract:
This Perspective reflects on the withdrawal of the Liverpool Care Pathway in the UK and its implications for Australia. Integrated care pathways are documents which outline the essential steps of multidisciplinary care in addressing a specific clinical problem. They can be used to introduce best clinical practice, to ensure that the most appropriate management occurs at the most appropriate time and that it is provided by the most appropriate health professional. By providing clear instructions, decision support and a framework for clinician-patient interactions, care pathways guide the systematic provision of best evidence-based care. The Liverpool Care Pathway (LCP) is an example of an integrated care pathway, designed in the 1990s to guide care for people with cancer who are in their last days of life and are expected to die in hospital. This pathway evolved out of a recognised local need to better support non-specialist palliative care providers in caring for patients dying of cancer within their inpatient units. Historically, despite the large number of people in acute care settings whose treatment intent is palliative, dying patients receiving general hospital acute care tended to lack sufficient attention from senior medical and nursing staff. The quality of end-of-life care was considered inadequate; it was therefore recognised that much could be learned from the way patients were cared for by palliative care services. The LCP was a strategy developed to improve end-of-life care in cancer patients and was based on the care received by those dying in the palliative care setting.
Abstract:
Purpose 1) To describe the physical activity (PA) levels of children attending after-school programs, 2) to examine PA levels in specific after-school sessions and activity contexts, and 3) to evaluate after-school PA differences in groups defined by sex and weight status. Methods One hundred forty-seven students in grades 3-6 (mean age: 10.1 ± 0.7, 54.4% male, 16.5% overweight (OW), 22.8% at risk for OW) from seven after-school programs in the midwestern United States wore ActiGraph GT1M accelerometers for the duration of their attendance at the program. PA was objectively assessed on six occasions during an academic year (three fall and three spring). Stored activity counts were uploaded to a customized data-reduction program to determine minutes of sedentary (SED), light (LPA), moderate (MPA), vigorous (VPA), and moderate-to-vigorous (MVPA) physical activity. Time spent in each intensity category was calculated for the duration of program attendance, as well as for specific after-school sessions (e.g., free play, snack time). Results On average, participants exhibited 42.6 min of SED, 40.8 min of LPA, 13.4 min of MPA, and 5.3 min of VPA. The average accumulation of MVPA was 20.3 min. Boys exhibited higher levels of MPA, VPA, and MVPA, and lower levels of SED and LPA, than girls. OW and at-risk-for-OW students exhibited significantly less VPA than nonoverweight students, but similar levels of LPA, MPA, and MVPA. MVPA levels were significantly higher during free-play activity sessions than during organized or structured activity sessions. Conclusion After-school programs seem to be an important contributor to the PA of attending children. Nevertheless, ample room for improvement exists by making better use of the existing time devoted to physical activity.
Abstract:
Obesity rates are increasing in children of all ages, and reduced physical activity (PA) is a likely contributor to this trend. Little is known about the physical activity behavior of preschool-age children, or about the influence of preschool attendance on physical activity. Purpose The purpose of this study was to quantify the physical activity levels of children attending a center-based half-day preschool program. Methods Forty-two 3- to 5-year-old children (mean age = 4.0 ± 0.7, 54.8% male, mean BMI = 16.5 ± 5.5, mean BMI percentile = 52.1 ± 33.5) from four class groups (two morning and two afternoon) wore an Actigraph 7164 accelerometer for the entire half-day program (including classroom learning experiences, snack and recess time) 2 times per week, for 10 weeks (20 activity monitoring records in total). Activity counts for each 5-s interval were uploaded to a customized data reduction program to determine total counts, minutes of moderate PA (MPA) (3–5.9 METs), and minutes of vigorous PA (VPA) (≥6 METs) per session. Counts were categorized as either MPA or VPA using the cutpoints developed by Sirard and colleagues (2001). Results Across the four 2.5-hour programs, the average MPA, VPA, and total counts (× 10³) were 12.4 ± 3.1 minutes, 18.3 ± 4.6 minutes, and 171.1 ± 29.7 counts, respectively. Thus, on average, children accumulated just over 12 minutes of moderate-to-vigorous PA per hour of program attendance. The PA variables did not differ significantly by gender, weight status, or time of day. There were, however, significant age differences, with 3-year-olds exhibiting significantly less PA than their 4- and 5-year-old counterparts. Conclusions These results suggest that young children are relatively low-active while attending preschool. Accordingly, interventions to increase movement opportunities during the preschool day are warranted.
Abstract:
It has been proposed that spatial reference frames with which object locations are specified in memory are intrinsic to a to-be-remembered spatial layout (intrinsic reference theory). Although this theory has been supported by accumulating evidence, it has only been collected from paradigms in which the entire spatial layout was simultaneously visible to observers. The present study was designed to examine the generality of the theory by investigating whether the geometric structure of a spatial layout (bilateral symmetry) influences selection of spatial reference frames when object locations are sequentially learned through haptic exploration. In two experiments, participants learned the spatial layout solely by touch and performed judgments of relative direction among objects using their spatial memories. Results indicated that the geometric structure can provide a spatial cue for establishing reference frames as long as it is accentuated by explicit instructions (Experiment 1) or alignment with an egocentric orientation (Experiment 2). These results are entirely consistent with those from previous studies in which spatial information was encoded through simultaneous viewing of all object locations, suggesting that the intrinsic reference theory is not specific to a type of spatial memory acquired by the particular learning method but instead generalizes to spatial memories learned through a variety of encoding conditions. In particular, the present findings suggest that spatial memories that follow the intrinsic reference theory function equivalently regardless of the modality in which spatial information is encoded.
Abstract:
This paper discusses how fundamentals of number theory, such as unique prime factorization and greatest common divisor can be made accessible to secondary school students through spreadsheets. In addition, the three basic multiplicative functions of number theory are defined and illustrated through a spreadsheet environment. Primes are defined simply as those natural numbers with just two divisors. One focus of the paper is to show the ease with which spreadsheets can be used to introduce students to some basics of elementary number theory. Complete instructions are given to build a spreadsheet to enable the user to input a positive integer, either with a slider or manually, and see the prime decomposition. The spreadsheet environment allows students to observe patterns, gain structural insight, form and test conjectures, and solve problems in elementary number theory.
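As a point of comparison with the spreadsheet model described above, here is a hedged sketch of the same prime-decomposition calculation in Python; the function name and the trial-division approach are my own and are not the article's spreadsheet formulas.

```python
# A compact Python analogue of the spreadsheet calculation described above:
# trial division producing the prime decomposition of a positive integer.
def prime_factorization(n: int) -> dict:
    """Return {prime: exponent} for n >= 2 using trial division."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                      # whatever remains is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(prime_factorization(360))    # {2: 3, 3: 2, 5: 1}
```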
Abstract:
The basic principles and equations are developed for elementary finance, based on the concept of compound interest. The five quantities of interest in such problems are present value, future value, amount of periodic payment, number of periods and the rate of interest per period. We consider three distinct means of computing each of these five quantities in Excel 2007: (i) use of algebraic equations, (ii) by recursive schedule and the Goal Seek facility, and (iii) use of Excel's intrinsic financial functions. The paper is intended to be used as the basis for a lesson plan and contains many examples and solved problems. Comment is made regarding the relative difficulty of each approach, and a prominent theme is the systematic use of more than one method to increase student understanding and build confidence in the answer obtained. Full instructions to build each type of model are given and a complete set of examples and solutions may be downloaded (Examples.xlsx and Solutions.xlsx).
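For orientation, the sketch below works one instance of the "algebraic equations" route for an ordinary annuity (end-of-period payments). The function names and sign conventions are my own, though the results should agree with Excel's PMT and FV for the same inputs.

```python
# Sketch of the algebraic route for an ordinary annuity (payments at the end
# of each period); names and sign conventions are assumptions, not Excel's.
def payment(pv: float, rate: float, nper: int) -> float:
    """Periodic payment that amortizes a present value pv over nper periods."""
    return pv * rate / (1 - (1 + rate) ** -nper)

def future_value(pv: float, pmt: float, rate: float, nper: int) -> float:
    """Future value of a present sum plus a level end-of-period payment."""
    growth = (1 + rate) ** nper
    return pv * growth + pmt * (growth - 1) / rate

# Example: repay a 10,000 loan over 36 months at 1% per month.
print(round(payment(10_000, 0.01, 36), 2))         # 332.14
# Example: save 100 per month for 10 years at 0.5% per month.
print(round(future_value(0, 100, 0.005, 120), 2))
```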
Abstract:
Number theory has in recent decades assumed a great practical importance, due primarily to its application to cryptography. This chapter discusses how elementary concepts of number theory may be illuminated and made accessible to upper secondary school students via appropriate spreadsheet models. In such environments, students can observe patterns, gain structural insight, form and test conjectures, and solve problems. The chapter begins by reviewing literature on the use of spreadsheets in general and the use of spreadsheets in number theory in particular. Two sample applications are then discussed. The first, factoring factorials, is presented and instructions are given to construct a model in Excel 2007. The second application, the RSA cryptosystem, is included because of its importance to Science, Technology, Engineering, and Mathematics (STEM) students. Number theoretic concepts relevant to RSA are discussed, and an outline of RSA is given, with an example. The chapter ends with instructions on how to construct a simple spreadsheet illustrating RSA.
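To make the RSA outline concrete, here is a toy, textbook-style example with tiny primes and no padding, in the spirit of the chapter's spreadsheet illustration; the specific numbers are illustrative and are not taken from the chapter.

```python
# Toy RSA example: key generation, encryption, and decryption with small
# textbook primes. Purely illustrative; real RSA uses large primes and padding.
from math import gcd

p, q = 61, 53
n = p * q                      # modulus, 3233
phi = (p - 1) * (q - 1)        # Euler's totient, 3120
e = 17                         # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

message = 65                   # plaintext as an integer smaller than n
cipher = pow(message, e, n)    # encryption: m^e mod n
plain = pow(cipher, d, n)      # decryption: c^d mod n

print(cipher, plain)           # 2790 65
```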
Abstract:
Customized magnetic traps were developed to produce a domain of dense plasmas with a narrow ion beam directed to a particular area of the processed substrate. A planar magnetron coupled with an arc discharge source created the magnetic traps to confine the plasma electrons and generate the ion beam with a controlled ratio of ion-to-neutral fluxes. Images of the plasma jet patterns and numerical visualizations help explain the observed phenomena.
Abstract:
Nitrogenated carbon nanotips (NCNTPs) have been synthesized using customized plasma-enhanced hot filament chemical vapor deposition. The morphological, structural, and photoluminescent properties of the NCNTPs are investigated using scanning and transmission electron microscopy, X-ray photoelectron spectroscopy, Raman spectroscopy, and photoluminescence spectroscopy. The photoluminescence measurements show that the NCNTPs predominantly emit a green band at room temperature, while strong blue emission is generated at 77 K. It is shown that these very different emission behaviors are related to the change of the optical band gap and the concentration of paramagnetic defects in the carbon nanotips. The studies shed light on the controversies surrounding the photoluminescence mechanisms of carbon-based amorphous films measured at different temperatures. The relevance of the results to the use of nitrogenated carbon nanotips in light-emitting optoelectronic devices is discussed.
Abstract:
In 2006, Gaurav Gupta and Josef Pieprzyk presented an attack on the branch-based software watermarking scheme proposed by Ginger Myles and Hongxia Jin in 2005. The software watermarking model is based on replacing jump instructions or unconditional branch statements (UBS) with calls to a fingerprint branch function (FBF) that computes the correct target address of the UBS as a function of the generated fingerprint and an integrity check. If the program is tampered with, the fingerprint and/or integrity check changes and the target address is not computed correctly. Gupta and Pieprzyk's attack uses debugger capabilities such as register and address lookup and breakpoints to minimize the need for manual inspection of the software. Using these resources, the FBF and the calls to it are identified, correct displacement values are generated, and calls to the FBF are replaced by the original UBS, transferring control to the correct target instruction. In this paper, we propose a watermarking model that provides security against such debugging attacks. The two primary measures taken are shifting the stack pointer modification operation from the FBF to the individual UBSs, and coding the stack pointer modification in the same language as the rest of the code rather than in assembly language to avoid conspicuous content. The manual component complexity increases from O(1) in the previous scheme to O(n) in our proposed scheme.
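As a conceptual aid (not the authors' implementation, which operates on machine-level branches and the stack pointer), the sketch below mimics a fingerprint branch function whose computed target is correct only while the embedded fingerprint and an integrity check over the protected code remain intact. The XOR encoding, hash-based check, and constants are illustrative assumptions.

```python
# Conceptual sketch of a "fingerprint branch function": it recovers a call
# site's jump displacement from a table keyed by the embedded fingerprint,
# and corrupts the result if an integrity check over the protected code no
# longer matches its baseline. All encodings and constants are illustrative.
import hashlib

FINGERPRINT = 0x5A17                       # watermark bits for this copy
# Each stored entry is (true displacement) XOR FINGERPRINT.
ENCODED_DISPLACEMENTS = {0: 0x40 ^ FINGERPRINT, 1: 0x7C ^ FINGERPRINT}

def integrity_check(code_bytes: bytes) -> int:
    """Reduce a hash of the protected code region to a small integer."""
    return int.from_bytes(hashlib.sha256(code_bytes).digest()[:2], "big")

def fingerprint_branch(call_site: int, code_bytes: bytes, baseline: int) -> int:
    """Compute the branch target; a wrong fingerprint or tampered code
    yields a wrong address, which is the scheme's tamper resistance."""
    displacement = ENCODED_DISPLACEMENTS[call_site] ^ FINGERPRINT
    delta = integrity_check(code_bytes) - baseline   # 0 when the code is intact
    return displacement + delta

code = b"\x90\x90\xc3"                     # stand-in for the protected code
baseline = integrity_check(code)
print(hex(fingerprint_branch(1, code, baseline)))    # 0x7c when untampered
```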