980 results for Initial values
Abstract:
We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. In these networks some of the nodes become cluster heads, aggregate the data of their cluster members and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources; this is a source of heterogeneity which may result from the initial setting or arise as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed, are not mobile, and that the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. Classical clustering protocols assume that all the nodes are equipped with the same amount of energy and, as a result, they cannot take full advantage of the presence of node heterogeneity. We propose SEP, a heterogeneity-aware protocol that prolongs the time interval before the death of the first node (which we refer to as the stability period), a property crucial for many applications in which the feedback from the sensor network must be reliable. SEP is based on weighted election probabilities for each node to become cluster head, according to the remaining energy in each node. We show by simulation that SEP always prolongs the stability period, and yields higher average throughput, compared with current clustering protocols. We conclude by studying the sensitivity of SEP to heterogeneity parameters capturing energy imbalance in the network, and we find that SEP yields a longer stability period for higher values of the extra energy brought by the more powerful nodes.
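To make the weighted-election idea concrete, here is a minimal Python sketch of how per-class election probabilities and a rotating cluster-head threshold could be computed. The parameter names (p_opt for the target cluster-head fraction, m for the fraction of "advanced" nodes, alpha for their extra energy factor) and the LEACH-style threshold are assumptions drawn from common descriptions of SEP, not the authors' code.

```python
import random

def sep_probabilities(p_opt, m, alpha):
    """Weighted election probabilities for SEP (sketch).

    p_opt : target fraction of cluster heads per round
    m     : fraction of nodes that are 'advanced' (extra energy)
    alpha : extra energy factor of advanced nodes
    """
    p_normal = p_opt / (1 + alpha * m)                    # normal nodes elected less often
    p_advanced = p_opt * (1 + alpha) / (1 + alpha * m)    # advanced nodes elected more often
    return p_normal, p_advanced

def becomes_cluster_head(p, round_idx):
    """LEACH-style rotating threshold test with election probability p."""
    epoch = int(1 / p)
    threshold = p / (1 - p * (round_idx % epoch))
    return random.random() < threshold

p_nrm, p_adv = sep_probabilities(p_opt=0.1, m=0.2, alpha=1.0)
print(p_nrm, p_adv)  # advanced nodes carry a proportionally larger election weight
```

The weighting keeps the expected number of cluster heads per round unchanged while letting advanced nodes serve as cluster heads more often, which is how the stability period gets extended.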
Abstract:
A novel technique to detect and localize periodic movements in video is presented. The distinctive feature of the technique is that it requires neither feature tracking nor object segmentation. Intensity patterns along linear sample paths in space-time are used to estimate the period of object motion in a given sequence of frames. Sample paths are obtained by connecting (in space-time) sample points from regions of high motion magnitude in the first and last frames. Oscillations in intensity values are induced at the time instants when an object intersects the sample path. The locations of the intensity peaks are determined by the parameters of the cyclic object motion and by the orientation of the sample path with respect to the object motion. The information about the peaks is used in a least squares framework to obtain an initial estimate of these parameters, which is further refined using the full intensity profile. The best estimate of the period of cyclic object motion is obtained by looking for consensus among the estimates from many sample paths. The proposed technique is evaluated on synthetic videos, where ground truth is known, and on American Sign Language videos, where the goal is to detect periodic hand motions.
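As an illustration of the least-squares and consensus steps, the sketch below fits a period to the peak times detected along each sample path, assuming the k-th peak occurs near t0 + k*T, and then picks the period with the widest support across paths. The function names and the simple consensus rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_period(peak_times):
    """Least-squares fit of t_k ~ t0 + k*T to the peak times from one sample path."""
    k = np.arange(len(peak_times), dtype=float)
    A = np.column_stack([np.ones_like(k), k])            # columns: [1, k]
    (t0, period), *_ = np.linalg.lstsq(A, np.asarray(peak_times, dtype=float), rcond=None)
    return t0, period

def consensus_period(estimates, tol=0.5):
    """Pick the period supported by the largest number of sample-path estimates."""
    estimates = np.asarray(estimates, dtype=float)
    support = [(np.sum(np.abs(estimates - e) < tol), e) for e in estimates]
    return max(support)[1]

# Example: peaks roughly every 10 frames along three sample paths
paths = [[3, 13, 23, 33], [5, 15, 26], [2, 11, 22, 31]]
print(consensus_period([estimate_period(p)[1] for p in paths]))
```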
Abstract:
Long-range dependence has been observed in many recent Internet traffic measurements. In addition, some recent studies have shown that, under certain network conditions, TCP itself can produce traffic that exhibits dependence over limited timescales, even in the absence of higher-level variability. In this paper, we use a simple Markovian model to argue that when the loss rate is relatively high, TCP's adaptive congestion control mechanism indeed generates traffic with OFF periods exhibiting a power-law shape over several timescales and thus introduces pseudo-long-range dependence into the overall traffic. Moreover, we observe that more variable initial retransmission timeout values for different packets introduce more variable packet inter-arrival times, which increases the burstiness of the overall traffic. We can thus explain why a single TCP connection can produce a time series that is misidentified as self-similar by standard tests.
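A toy simulation of the mechanism, assuming standard exponential backoff of the retransmission timeout: each additional loss doubles the timeout, so OFF periods spread over several timescales, and drawing the initial RTO from a wider range makes the traffic burstier. This is a sketch of the intuition, not the Markovian model analysed in the paper; the parameter values are arbitrary.

```python
import random

def off_period(loss_rate, initial_rto, max_backoff=6):
    """Length of one OFF period: consecutive losses double the timeout (capped)."""
    rto, total, backoffs = initial_rto, 0.0, 0
    while random.random() < loss_rate and backoffs <= max_backoff:
        total += rto
        rto *= 2            # exponential backoff of the retransmission timeout
        backoffs += 1
    return total

# More variable initial RTOs -> more variable inter-arrival times -> burstier traffic
random.seed(1)
samples = [off_period(loss_rate=0.3, initial_rto=random.uniform(0.5, 3.0))
           for _ in range(10000)]
offs = [s for s in samples if s > 0]
print(max(offs) / (sum(offs) / len(offs)))  # heavy tail: max far exceeds the mean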
Abstract:
This thesis examines the relationship between initial loss events and the corporate governance and earnings management behaviour of the firms that report them. This is done using four years of corporate governance information spanning the report of an initial loss for companies listed on the UK Stock Exchange. An industry- and size-matched control sample is used in a difference-in-differences analysis to isolate the impact of the initial loss event during the period. It is reported that, in general, an initial loss motivates an improvement in corporate governance in those loss firms where a relative weakness existed prior to the loss, and that these changes mainly occur before the initial loss is announced. Firms with stronger (i.e. better quality) corporate governance have less need to alter it in response to the loss. It is also reported that initial loss firms use positive abnormal accruals in the year before the loss in an attempt to defer or avoid the loss; the weaker the corporate governance, the more likely it is that loss firms manage earnings in this manner. Abnormal accruals are also found to be predictive of an initial loss and, when used as a conditioning variable, the quality of corporate governance is an important mitigating factor in this regard. Once the loss is reported, loss firms unwind these abnormal accruals, although no evidence of big-bath behaviour is found. The extent to which these abnormal accruals are subsequently unwound is also found to be a function of both the quality of corporate governance and the severity of the initial loss.
Abstract:
Flavour release from food is determined by the binding of flavours to other food ingredients and the partition of flavour molecules among different phases. Food emulsions are used as delivery systems for food flavours, and tailored structuring in emulsions provides novel means to better control flavour release. The current study investigated four structured oil-in-water emulsions with structuring in the oil phase, the oil-water interface, and the water phase. Oil phase structuring was achieved by the formation of monoglyceride (MG) liquid crystals in the oil droplets (MG structured emulsions). A structured interface was created by the adsorption of a whey protein isolate (WPI)-pectin double layer at the interface (multilayer emulsion). Water phase structured emulsions referred to emulsion-filled protein gels (EFP gels), in which emulsion droplets were embedded in a WPI gel network, and to emulsions with maltodextrins (MDs) of different dextrose-equivalent (DE) values. Flavour compounds with different physicochemical properties were added to the emulsions, and flavour release (release rate, headspace concentration and air-emulsion partition coefficient) was measured by GC headspace analysis. Emulsion structures, including crystalline structure, particle size, emulsion stability, rheology, texture, and microstructure, were characterized using differential scanning calorimetry and X-ray diffraction, light scattering, multisample analytical centrifugation, rheometry, texture analysis, and confocal laser scanning microscopy, respectively. In MG structured emulsions, MG self-assembled into liquid crystalline structures and stable β-form crystals were formed after 3 days of storage at 25 °C. The inclusion of MG crystals gave Tween 20 stabilized emulsions viscoelastic properties, and it made WPI stabilized emulsions more sensitive to changes in pH and NaCl concentration. Flavour compounds in MG structured emulsions had lower initial headspace concentrations and air-emulsion partition coefficients than those in unstructured emulsions, and flavour release could be modulated by changing MG content, oil content and oil type. WPI-pectin multilayer emulsions were stable at pH 5.0, 4.0, and 3.0, but they showed extensive creaming when subjected to salt solutions with NaCl ≥ 150 mM and when mixed with artificial salivas. An increase of pH from 5.0 to 7.0 resulted in a higher headspace concentration but an unchanged release rate, and an increase of NaCl concentration led to increased headspace concentration and release rate. The study also showed that salivas could trigger higher release of hydrophobic flavours and lower release of hydrophilic flavours. In EFP gels, increases in protein content and oil content contributed to gels with higher storage modulus and force at breaking. Flavour compounds had significantly lower release rates and air-emulsion partition coefficients in the gels than in the corresponding ungelled emulsions, and the reduction scaled with the increase in protein content. Gels with a stronger gel network but lower oil content were also prepared, and lower or unaffected release rates of the flavours were observed. In emulsions containing maltodextrins, water froze at a much lower temperature, and emulsion stability under freeze-thawing was greatly improved; among the different MDs, MD DE 6 offered the emulsion the highest stability. Flavours had lower air-emulsion partition coefficients in the emulsions with MDs than in the emulsion without MD. Moreover, the inclusion of MDs in the emulsions allowed most flavours to retain similar release profiles before and after freeze-thaw treatment. The present study provides information about different structured emulsions as delivery systems for flavour compounds and on how food structure can be designed to modulate flavour release, which could be helpful in the development of functional foods with improved flavour profiles.
Abstract:
This qualitative research expands understanding of how information about a range of Novel Food Technologies (NFTs) is used and assimilated, and of the implications of this for the evolution of attitudes and acceptance. This work enhances theoretical and applied understanding of citizens’ evaluative processes around these technologies. The approach applied involved observations of interactive exchanges between citizens and information providers (i.e. food scientists), during which they discussed a specific technology. This flexible, yet structured, approach revealed how individuals construct meaning around information about specific NFTs. A rich dataset of 42 ‘deliberate discourse’ and 42 post-discourse transcripts was collected. Data analysis encompassed three stages: an initial descriptive account of the complete dataset based on the top-down bottom-up (TDBU) model of attitude formation, followed by inductive and deductive thematic analysis across the selected technology groups. The hybrid thematic analysis identified a Conceptual Model, which represents a holistic perspective on the influences and associated features directing ‘sense-making’ and ultimate evaluations around the technology clusters. How individuals make sense of these technologies is shaped by: their beliefs, values and personal characteristics; their perceptions of power and control over the application of the technology; and the assumed relevance of the technology and its applications within different contexts. These influences form the frame within which sense-making around the technologies is created. Internal negotiations between these influences are evident, and evaluations are based on the relative importance of each influence to the individual, which tends to contribute to attitude ambivalence and instability. The findings indicate that the processes of forming and changing attitudes towards these technologies are: complex; dependent on characteristics of the individual, technology, application and product; and impacted by the nature and forms of information provided. Challenges are faced in engaging with the public about these technologies, as levels of knowledge, understanding and interest vary.
Abstract:
This paper presents a design science approach to solving persistent problems in the international shipping ecosystem by creating the missing common information infrastructures. Specifically, the paper reports on an ongoing dialogue between stakeholders in the shipping industry and information systems researchers engaged in the design and development of a prototype for an innovative IT artifact called the Shipping Information Pipeline, a kind of “internet” for shipping information. The instrumental aim is to enable information to flow seamlessly across organizational boundaries and national borders within international shipping, which is a rather complex domain. The intellectual objective is to generate, and evaluate the efficacy and effectiveness of, design principles for inter-organizational information infrastructures in the international shipping domain that can have positive impacts on global trade and local economies.
Abstract:
Strikingly, most of the literature suggests that market competition pushes firms to take creativity and innovation seriously as a matter of survival. Using the data, we examined creativity methods (Napier and Nilsson, 2008; Napier, 2010) in conjunction with three influential cultural values, namely risk tolerance, relationships, and dependence on resources, to assess how they influence the decisions of entrepreneurs. The primary objective of this study is the perceived value of entrepreneurship and creativity in business conducted within a turbulent environment. Our initial hypothesis is that a typical entrepreneurial process carries with it “creativity-enabling elements.” In a normal situation, when businesses focus more on optimizing their resources for commercial gain, perceptions about the value of entrepreneurial creativity are usually vague. However, in difficult times and under harsh competition, the difference between survival and failure may be creativity. This paper also examines many previous findings on both entrepreneurship and creativity and suggests a highly possible “organic growth” of creativity in an entrepreneurial environment and a reinforcing value of entrepreneurship when creative power is present; in other words, we see each idea reinforcing the other. We use data from a survey of Vietnamese firms during the chaotic economic year 2012 to learn about the ‘entrepreneurship-creativity nexus.’ A data set of 137 responses qualified for statistical examination was obtained from an online survey, which started on February 16 and ended on May 24, 2012, sent to local entrepreneurs and corporate managers through social networks. The authors employed categorical data analysis (Agresti, 2002; Azen & Walker, 2011). The statistical analyses confirm that, in business operation, creativity and the entrepreneurial spirit can hardly be separated, and that this holds not only for entrepreneurial firms but also for well-established companies. The single most important factor before business startup and during early implementation in Vietnam is what we call “connection/relationship.” However, businesspeople are increasingly aware of the need for creativity and innovation.
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network: routing is performed in the direction of this vector field at every location, and the magnitude of the vector field at every location represents the density of the data flow transiting through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatic theory. We show that, in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, similar to positive charges in electrostatics; the destinations are sinks of information, similar to negative charges; and the network is similar to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). As one application of this vector field model, we offer a scheme for energy-efficient routing. Our routing scheme is based on raising the permittivity coefficient in the places of the network where nodes have high residual energy and setting it to a low value in the places where the nodes do not have much energy left. Our simulations show that this method gives a significant increase in network lifetime compared to the shortest-path and weighted shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend the approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations, and each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction of the destinations and how much communication load to assign to each destination in order to optimize the performance of the network. We use the vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the communication load of the network to the destinations, the value of this potential field should be equal at the locations of all the destinations. Another application of the vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations; based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
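To make the electrostatics analogy explicit, one compact way to write the formulation, assuming the quadratic cost takes the permittivity-weighted form suggested by that analogy (the exact expression used in the thesis may differ), is

\[
\min_{\mathbf{D}} \int_{\Omega} \frac{|\mathbf{D}(x)|^{2}}{\epsilon(x)} \, dx
\quad \text{subject to} \quad \nabla \cdot \mathbf{D}(x) = \rho(x),
\]

where \(\mathbf{D}\) is the information flow density, \(\rho(x)\) is positive at sensors (sources of information) and negative at destinations (sinks), and \(\epsilon(x)\) plays the role of the permittivity, set high where residual energy is high. The optimality conditions then mirror electrostatics, \(\mathbf{D} = -\epsilon \nabla \phi\) with \(\nabla \cdot (\epsilon \nabla \phi) = -\rho\), so optimal routes follow the direction of \(\mathbf{D}\), and with multiple destinations the optimal load assignment equalizes the potential \(\phi\) at all destination locations.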
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining the values of these quantities. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. On its own, this kind of test is not robust to multiple simultaneous tests performed at different routers; we make it robust by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a test of responsiveness for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets being sent at the start of the TCP three-way handshake, and use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to design these signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade because of cross-interference between simultaneously testing routers. We demonstrate the efficacy of our methods through mathematical analysis and extensive simulation experiments.
Abstract:
The objective of this study was to determine whether MTND2*LHON4917G (4917G), a specific non-synonymous polymorphism in the mitochondrial genome previously associated with neurodegenerative phenotypes, is associated with increased risk for age-related macular degeneration (AMD). A preliminary study of 393 individuals (293 cases and 100 controls) ascertained at Vanderbilt revealed an increased occurrence of 4917G in cases compared to controls (15.4% vs. 9.0%, p = 0.11). Since there was a significant age difference between cases and controls in this initial analysis, we extended the study by selecting Caucasian pairs matched on exact age at examination. From the 1547 individuals in the Vanderbilt/Duke AMD population association study (including 157 in the preliminary study), we were able to match 560 (280 cases and 280 unaffected) on exact age at examination. This study population was genotyped for 4917G plus specific AMD-associated nuclear genome polymorphisms in CFH, LOC387715 and ApoE. Following adjustment for the listed nuclear genome polymorphisms, 4917G independently predicted the presence of AMD (OR = 2.16, 95% CI 1.20-3.91, p = 0.01). In conclusion, a specific mitochondrial polymorphism previously implicated in other neurodegenerative phenotypes (4917G) appears to convey risk for AMD independent of recently discovered nuclear DNA polymorphisms.
Abstract:
OBJECTIVE: To examine the associations between attention-deficit/hyperactivity disorder (ADHD) symptoms, obesity and hypertension in young adults in a large population-based cohort. DESIGN, SETTING AND PARTICIPANTS: The study population consisted of 15,197 respondents from the National Longitudinal Study of Adolescent Health, a nationally representative sample of adolescents followed from 1995 to 2009 in the United States. Multinomial logistic and logistic models examined the odds of overweight, obesity and hypertension in adulthood in relation to retrospectively reported ADHD symptoms. Latent curve modeling was used to assess the association between symptoms and naturally occurring changes in body mass index (BMI) from adolescence to adulthood. RESULTS: A linear association was identified between the number of inattentive (IN) and hyperactive/impulsive (HI) symptoms and waist circumference, BMI, diastolic blood pressure and systolic blood pressure (all P-values for trend <0.05). Controlling for demographic variables, physical activity, alcohol use, smoking and depressive symptoms, those with three or more HI or IN symptoms had the highest odds of obesity (HI 3+, odds ratio (OR) = 1.50, 95% confidence interval (CI) = 1.22-2.83; IN 3+, OR = 1.21, 95% CI = 1.02-1.44) compared with those with no HI or IN symptoms. HI symptoms at the 3+ level were significantly associated with higher odds of hypertension (HI 3+, OR = 1.24, 95% CI = 1.01-1.51; HI continuous, OR = 1.04, 95% CI = 1.00-1.09), but the associations were nonsignificant when the models were adjusted for BMI. Latent growth modeling indicated that, compared with those reporting no HI or IN symptoms, those reporting three or more symptoms had higher initial levels of BMI during adolescence. Only HI symptoms were associated with change in BMI. CONCLUSION: Self-reported ADHD symptoms were associated with adult BMI and with change in BMI from adolescence to adulthood, providing further evidence of a link between ADHD symptoms and obesity.
Abstract:
PURPOSE: To investigate the dosimetric effects of adaptive planning on lung stereotactic body radiation therapy (SBRT). METHODS AND MATERIALS: Forty of 66 consecutive lung SBRT patients were selected for a retrospective adaptive planning study. CBCT images acquired at each fraction were used for treatment planning. Adaptive plans were created using the same planning parameters as the original CT-based plan, with the goal of achieving a comparable conformality index (CI). For each patient, two cumulative plans, a nonadaptive plan (PNON) and an adaptive plan (PADP), were generated and compared for the following organs at risk (OARs): cord, esophagus, chest wall, and the lungs. Dosimetric comparison was performed between PNON and PADP for all 40 patients. Correlations were evaluated between changes in dosimetric metrics induced by adaptive planning and potential impacting factors, including tumor-to-OAR distances (dT-OAR), initial internal target volume (ITV1), ITV change (ΔITV), and effective ITV diameter change (ΔdITV). RESULTS: Thirty-four (85%) patients showed an ITV decrease and 6 (15%) patients showed an ITV increase over the course of lung SBRT. Percentage ITV change ranged from -59.6% to 13.0%, with a mean (±SD) of -21.0% (±21.4%). Averaged over all patients, PADP resulted in significantly (P=0 to .045) lower values for all dosimetric metrics. ΔdITV/dT-OAR was found to correlate with changes in dose to 5 cc (ΔD5cc) of the esophagus (r=0.61) and dose to 30 cc (ΔD30cc) of the chest wall (r=0.81). Stronger correlations between ΔdITV/dT-OAR and ΔD30cc of the chest wall were found for peripheral (r=0.81) and central (r=0.84) tumors, respectively. CONCLUSIONS: The dosimetric effects of adaptive lung SBRT planning depend on target volume changes and tumor-to-OAR distances. Adaptive lung SBRT can potentially reduce dose to adjacent OARs when the tumor volume shrinks substantially during treatment.
Abstract:
Emergency departments are challenging research settings, where truly informed consent can be difficult to obtain. A deeper understanding of emergency medical patients' opinions about research is needed. We conducted a systematic review and meta-summary of quantitative and qualitative studies on the values, attitudes, and beliefs of emergency medical research participants that influence research participation. We included studies of adults that investigated opinions toward participation in emergency medicine research. We excluded studies focused on the association between demographics or consent document features and participation, and those focused on non-emergency research. In August 2011, we searched the following databases: MEDLINE, EMBASE, Google Scholar, Scirus, PsycINFO, AgeLine and Global Health. Titles, abstracts and then full manuscripts were independently evaluated by two reviewers. Disagreements were resolved by consensus and adjudicated by a third author. Studies were evaluated for bias using standardised scores. We report themes associated with participation or refusal. Our initial search produced over 1800 articles. A total of 44 articles were extracted for full-manuscript analysis, and 14 were retained based on our eligibility criteria. Among factors favouring participation, altruism and personal health benefit had the highest frequency. Mistrust of researchers, feeling like a 'guinea pig' and risk were the leading factors favouring refusal. Many studies noted limitations of informed consent processes in emergent conditions. We conclude that highlighting the benefits to the participant and society, mitigating risk and increasing public trust may increase participation in emergency medical research. New methods for conducting informed consent in such studies are needed.
Abstract:
As a psychological principle, the golden rule represents an ethic of universal empathic concern. It is, surprisingly, present in the sacred texts of virtually all religions, and in philosophical works across eras and continents. Building on the literature demonstrating a positive impact of prosocial behavior on well-being, the present study investigates the psychological function of universal empathic concern in Indian Hindus, Christians, Muslims and Sikhs.
I develop a measure of the centrality of the golden rule-based ethic, within an individual’s understanding of his or her religion, that is applicable to all theistic religions. I then explore the consistency of its relationships with psychological well-being and other variables across religious groups.
Results indicate that this construct, named Moral Concern Religious Focus, can be reliably measured in disparate religious groups, and consistently predicts well-being across them. With measures of Intrinsic, Extrinsic and Quest religious orientations in the model, only Moral Concern and religiosity predict well-being. Moral Concern alone mediates the relationship between religiosity and well-being, and explains more variance in well-being than religiosity alone. The relationship between Moral Concern and well-being is mediated by increased preference for prosocial values, more satisfying interpersonal relationships, and greater meaning in life. In addition, across religious groups Moral Concern is associated with better self-reported physical and mental health, and more compassionate attitudes toward oneself and others.
Two additional types of religious focus are identified: Personal Gain, representing the motive to use religion to improve one’s life, and Relationship with God. Personal Gain is found to predict reduced preference for prosocial values, less meaning in life, and lower quality of relationships. It is associated with greater interference of pain and physical or mental health problems with daily activities, and lower self-compassion. Relationship with God is found to be associated primarily with religious variables and greater meaning in life.
I conclude that individual differences in the centrality of the golden rule and its associated ethic of universal empathic concern may play an important role in explaining the variability in associations between religion, prosocial behavior and well-being noted in the literature.