27 results for sense-and-avoid
Abstract:
For high-technology entrepreneurs, attaining an appropriate level of investment to support new ventures is challenging, as substantial investment is usually required prior to revenue generation. Consequently, entrepreneurs must present their firms as investment ready in the context of an uncertain market response and the absence of any trading history. Gaining tenancy within a business incubator can be advantageous to this process, given that placement enhances entrepreneurial contact with potential investors whilst professional client advisors (CAs) use their expertise to assist in the development of a credible business plan. However, for the investment proposal to be successful, it must make sense to fund managers despite their lack of technological expertise and product knowledge. Thus, this article explores how incubator CAs and entrepreneurs act in concert to mould innovative ideas into plausible business plans that make sense to venture fund investors. To illustrate this process, we draw upon empirical evidence which suggests that CAs act as sense-makers between venture fund managers (VFMs) and high-technology entrepreneurs, yet their role and influence appear undervalued. These findings have implications for entrepreneurial access to much-needed funding and also for the identification of investment opportunities for VFMs. © 2011 Taylor & Francis.
Abstract:
Over the last 30 years, governments almost everywhere in the world have furthered a global neoliberal agenda by withdrawing the state from the delivery of services, decreasing social spending and lowering corporate taxation. This restructuring has led to a massive transfer of wealth from the welfare state and working-class people to capital. In order to legitimize this restructuring, conservative governments engage in collective blaming of their denizens. This presentation will examine some of the widely circulated phrases that have been used by the dominant elite in some countries during the last year to legitimize the imposition of austerity measures. Phrases such as ‘We all partied’, used by the Irish finance minister, Brian Lenihan, to explain the Irish crisis and collectively blame all Irish people, ‘We must all share the pain’, deployed by another Irish minister, Gilmore, and the UK coalition administration’s sound bite ‘We are all in this together’ legitimize the imposition of austerity measures. Utilizing the Gramscian concept of common sense (Gramsci, 1971), I call these phrases ‘austerity common sense’. They are austerity common sense because they both reflect and legitimate the austerity agenda. By deploying these phrases, the ruling economic and political elite seek to influence the perception of the people and pre-empt any intention of resistance. The dominant theme of these phrases is that there is no alternative and that austerity measures are somehow self-inflicted and, as such, should not be challenged because we are all to blame. The purpose of this presentation is to explore the ‘austerity common sense’ theme from a Gramscian approach, focus on its implications for the social work profession and discuss ways to resist the imposition of the global neoliberal agenda.
Abstract:
The decision of the U.S. Supreme Court in 1991 in Feist Publications, Inc. v. Rural Tel. Service Co. affirmed originality as a constitutional requirement for copyright. Originality has a specific sense and is constituted by a minimal degree of creativity and independent creation. The not original is the more developed concept within the decision. It includes the absence of a minimal degree of creativity as a major constituent. Different levels of absence of creativity are also distinguished, from the extreme absence of creativity to insufficient creativity. There is a gestalt effect of analogy between the delineation of the not original and the concept of computability. More specific correlations can be found within the extreme absence of creativity. "[S]o mechanical" in the decision can be correlated with an automatic mechanical procedure, and clauses with a historical resonance can be correlated with understandings of computability as what would naturally be regarded as computable. The routine within the extreme absence of creativity can be regarded as the product of a computational process. The concern of this article is with rigorously establishing an understanding of the extreme absence of creativity, primarily through the correlations with aspects of computability. The understanding established is consistent with the other elements of the not original. It is also revealed as testable under real-world conditions. The possibilities for understanding insufficient creativity, a minimal degree of creativity, and originality, from the understanding developed of the extreme absence of creativity, are indicated.
Abstract:
In attempting to understand the distributions of both introduced species and the native species on which they impact, there is a growing trend to integrate studies of behaviour with more traditional life-history/ecological approaches. The question of what mechanisms drive the displacement of the freshwater amphipod Gammarus duebeni by the often-introduced G. pulex is presented as a case study. Patterns of displacement are well documented throughout Europe, but the speed and direction of displacement between these species can be varied. From early studies proposing interspecific competition as causal in these patterns, I review research progress to date. I show there has been no evidence for interspecific competition operating, other than the field patterns themselves, a somewhat tautological argument. Rather, the increased recognition of behavioural attributes with respect to the cannibalistic and predatory nature of these species gave rise to a series of studies unravelling the processes driving field patterns. Both species engage in 'intraguild predation' (IGP), with moulting females particularly vulnerable to predation by congeneric males. G. pulex is better able than G. duebeni both to engage in and to avoid this interaction. However, several factors mediate the strength and asymmetry of this IGP, some biotic (e.g. parasitism) and others abiotic (e.g. water chemistry). Further, a number of alternative hypotheses that may account for the displacement (hybridization; parasite transmission) have been tested and rejected. While interspecific competition has been modelled mathematically and found to be a weak interaction relative to IGP, mechanisms of competition between these Gammarus species remain largely untested empirically. Since IGP may be finely balanced in some circumstances, I conclude that the challenge to detect interspecific competition remains and we require assessment of its role, if any, in the interaction between these species. Appreciation of behavioural attributes and their mediation should allow us to more fully understand, and perhaps predict, species introductions and resultant distributions.
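To make concrete what a mathematical comparison of interspecific competition and IGP can look like, the following toy two-species model adds an asymmetric intraguild-predation term to a standard discrete-time Lotka-Volterra competition model; the functional forms and every parameter value are invented for illustration only and are not the models referred to in the abstract above. A minimal sketch in Python:

# Toy discrete-time Lotka-Volterra competition model with an added asymmetric
# intraguild-predation (IGP) term.  All parameter values are invented for
# illustration and are not taken from the Gammarus literature.
def simulate(steps=200, r=0.5, K=100.0, comp=0.05, igp=0.006):
    n_duebeni, n_pulex = 50.0, 5.0
    for _ in range(steps):
        # Logistic growth with weak interspecific competition (comp) plus an IGP
        # term that is asymmetric: 'pulex' preys on 'duebeni' more strongly than
        # the reverse, and converts part of that predation into its own growth.
        d_duebeni = r * n_duebeni * (1 - (n_duebeni + comp * n_pulex) / K) - igp * n_duebeni * n_pulex
        d_pulex = r * n_pulex * (1 - (n_pulex + comp * n_duebeni) / K) + 0.5 * igp * n_duebeni * n_pulex
        n_duebeni = max(n_duebeni + d_duebeni, 0.0)
        n_pulex = max(n_pulex + d_pulex, 0.0)
    return n_duebeni, n_pulex

print(simulate())  # with these invented numbers 'duebeni' collapses while 'pulex' persists

Setting igp to zero leaves only the weak competition term, under which both toy populations coexist near carrying capacity; this is a crude way of seeing how a model comparison can rank competition as a weaker interaction than IGP.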
Abstract:
Introduction: Optimal management of mechanical ventilation and weaning requires dynamic and collaborative decision making to minimize complications and avoid delays in the transition to extubation. In the absence of collaboration, ventilation decision making may be fragmented, inconsistent, and delayed. Our objective was to describe the professional group with responsibility for key ventilation and weaning decisions and to examine organizational characteristics associated with nurse involvement.
Methods: A multi-center, cross-sectional, self-administered survey was sent to nurse managers of adult intensive care units (ICUs) in Denmark, Germany, Greece, Italy, Norway, Switzerland, Netherlands and United Kingdom (UK). We summarized data as proportions (95% confidence intervals (CIs)) and calculated odds ratios (OR) to examine ICU organizational variables associated with collaborative decision making.
Results: Response rates ranged from 39% (UK) to 92% (Switzerland), providing surveys from 586 ICUs. Interprofessional collaboration (nurses and physicians) was the most common approach to initial selection of ventilator settings (63% (95% CI 59 to 66)), determination of extubation readiness (71% (67 to 75)), weaning method (73% (69 to 76)), recognition of weaning failure (84% (81 to 87)) and weaning readiness (85% (82 to 87)), and titration of ventilator settings (88% (86 to 91)). A nurse-to-patient ratio other than 1:1 was associated with decreased interprofessional collaboration during titration of ventilator settings (OR 0.2, 95% CI 0.1 to 0.6), weaning method (0.4 (0.2 to 0.9)), determination of extubation readiness (0.5 (0.2 to 0.9)) and weaning failure (0.4 (0.1 to 1.0)). Use of a weaning protocol was associated with increased collaborative decision making for determining weaning (1.8 (1.0 to 3.3)) and extubation readiness (1.9 (1.2 to 3.0)), and weaning method (1.8 (1.1 to 3.0)). Country of ICU location influenced the profile of responsibility for all decisions. Automated weaning modes were used in 55% of ICUs.
Conclusions: Collaborative decision making for ventilation and weaning was employed in most ICUs in all countries, although it was influenced by nurse-to-patient ratio and the presence of a protocol, and varied across countries. Potential clinical implications of a lack of collaboration include delayed adaptation of ventilation to changing physiological parameters, and delayed recognition of weaning and extubation readiness, resulting in unnecessary prolongation of ventilation.
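As a rough illustration of the odds-ratio analysis described in the Methods above, the sketch below computes an odds ratio and a Wald 95% confidence interval from a 2×2 table; the counts and the organizational factor are hypothetical and purely illustrative, not data from the survey. A minimal sketch in Python:

import math

# Hypothetical 2x2 table (invented counts, not survey data):
# rows = organizational factor present/absent (e.g. 1:1 nurse-to-patient ratio),
# columns = interprofessional collaboration reported yes/no.
a, b = 120, 40    # factor present: collaboration yes / no
c, d = 200, 150   # factor absent:  collaboration yes / no

odds_ratio = (a * d) / (b * c)

# Wald confidence interval, computed on the log-odds scale and back-transformed.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")

An OR below 1 with a confidence interval excluding 1, as reported above for ICUs without a 1:1 nurse-to-patient ratio, indicates lower odds of collaborative decision making in that group.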
Abstract:
This is a paper about resistance and affordance as they relate to music-making in the most extended sense, and perhaps about empathy, if this is understood as a capacity to ‘read’ the resistances and affordances of objects, bodies, people and environments. It proceeds from a set of broad working assumptions which inform one individual’s musical practice, via a description of a musical-instrument-making project which is a hybrid of physical and virtual elements and is designed to test those assumptions, to a speculative finale in which it is suggested that musicking might, in some circumstances, be regarded in itself as a form of resistance. It moves from the intimate and personal, through what might be regarded as local concerns, to more global observation, prefiguring the structure of the performance system it describes: the Virtual-Physical Feedback flute.
Abstract:
Political parties have only recently become a subject of investigation in political theory. In this paper I analyse religious political parties in the context of John Rawls’s political liberalism. Rawlsian political liberalism, I argue, overly constrains the scope of democratic political contestation, especially the kind of contestation channelled by parties. This restriction imposed upon political contestation risks undermining democracy and the development of the kind of democratic ethos that political liberalism cherishes. In this paper I therefore aim to provide a broader and more inclusive understanding of ‘reasonable’ political contestation, able to accommodate those parties (including religious ones) that political liberalism, as customarily understood, would exclude from the democratic realm. More specifically, I first embrace Muirhead and Rosenblum’s (Perspectives on Politics 4: 99–108, 2006) idea that parties are ‘bilingual’ links between state and civil society and I draw its normative implications for party politics. Subsequently, I assess whether Rawls’s political liberalism is sufficiently inclusive to allow the presence of parties conveying religious and other comprehensive values. Due to Rawls’s thick conceptions of reasonableness and public reason, I argue, political liberalism risks seriously limiting the number and kinds of comprehensive values which may be channelled by political parties into the public political realm, and this may render it particularly inhospitable to religious political parties. Nevertheless, I claim, Rawls’s theory does offer some scope for reinterpreting the concepts of reasonableness and public reason in a thinner and less restrictive sense, and this may render it more inclusive towards religious partisanship.
Abstract:
Background: Natural Killer (NK) cells play an important role in the detection and elimination of virus-infected, damaged or cancer cells. NK cell function is guided by the expression of Killer Immunoglobulin-like Receptors (KIRs) and is contributed to by the cytokine milieu. KIR molecules on NK cells are grouped into stimulatory and inhibitory KIR haplotypes A and B, through which NK cells sense and tolerate HLA self-antigens or up-regulate the NK-cytotoxic response to cells with altered HLA self-antigens, damaged by viruses or tumours. We have previously described increased numbers of NK and NK-related subsets in association with sIL-2R cytokine serum levels in BELFAST octo/nonagenarians. We hypothesised that changes in KIR A and B haplotype gene frequencies could explain the increased cytokine profiles and NK compartments previously described in Belfast Elderly Longitudinal Free-living Aging STudy (BELFAST) octo/nonagenarians, who show evidence of ageing well.
Results: In the BELFAST study, 24% of octo/nonagenarians carried the KIR A haplotype and 76% the KIR B haplotype, with no differences in KIR A haplogroup frequency between male and female subjects (23% v 24%; p=0.88) or in KIR B haplogroup frequency (77% v 76%; p=0.99). Octo/nonagenarian KIR A haplotype carriers showed increased NK numbers and percentage compared with KIR B subjects (p=0.003 and p=0.016, respectively). There were no KIR A/B haplogroup-associated changes for related CD57+CD8 (high or low) subsets. Using logistic regression, KIR B carriers were predicted to have higher IL-12 cytokine levels than KIR A carriers by about 3% (OR 1.03, 95% CI 0.99–1.09; p=0.027) and 14% higher levels of TGF-β (active), a cytokine with an anti-inflammatory role (OR 1.14, 95% CI 0.99–1.09; p=0.002).
Conclusion: In this observational study, BELFAST octo/nonagenarians carrying the KIR A haplotype showed higher NK cell numbers and percentage compared with KIR B carriers. Conversely, KIR B haplotype carriers, with genes encoding activating KIRs, showed a tendency towards higher serum pro-inflammatory cytokines compared with KIR A carriers. While the findings in this study should be considered exploratory, they may serve to stimulate debate about the immune signatures of those who appear to age slowly and who represent a model for good-quality survivorhood. © 2013 Rea et al.; licensee BioMed Central Ltd.
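To illustrate the kind of logistic-regression odds ratio quoted in the Results above (an OR per unit of cytokine associated with carrying the KIR B haplotype), the sketch below fits such a model on simulated data; the variable names, sample size and all values are assumptions for illustration, not the BELFAST data. A minimal sketch in Python using statsmodels:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data standing in for the real cohort: 1 = KIR B carrier, 0 = KIR A.
rng = np.random.default_rng(0)
n = 200
il12 = rng.normal(50, 10, n)                     # illustrative IL-12 serum levels
p = 1 / (1 + np.exp(-(-2.0 + 0.03 * il12)))      # true log-odds used only to simulate
kir_b = rng.binomial(1, p)

X = sm.add_constant(pd.DataFrame({"IL12": il12}))
model = sm.Logit(kir_b, X).fit(disp=0)

or_per_unit = np.exp(model.params["IL12"])       # odds ratio per unit of IL-12
ci_low, ci_high = np.exp(model.conf_int().loc["IL12"])
print(f"OR per unit IL-12 = {or_per_unit:.3f}, 95% CI {ci_low:.3f} to {ci_high:.3f}")

Note that the modelling direction here (haplotype as outcome, cytokine as predictor) is one plausible reading of the reported analysis, not a restatement of the authors' exact model.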
Abstract:
Background: There has been an explosion in research into possible associations between periodontitis and various systemic diseases and conditions.
Aim: To review the evidence for associations between periodontitis and various systemic diseases and conditions, including chronic obstructive pulmonary disease (COPD), pneumonia, chronic kidney disease, rheumatoid arthritis, cognitive impairment, obesity, metabolic syndrome and cancer, and to document headline discussions of the state of each field. Periodontal associations with diabetes, cardiovascular disease and adverse pregnancy outcomes were not discussed by working group 4.
Results: Working group 4 recognized that the studies performed to date were largely cross-sectional or case-control, with few prospective cohort studies and no randomized clinical trials. The best current evidence suggests that periodontitis is characterized by both infection and pro-inflammatory events, which variously manifest within the systemic diseases and disorders discussed. Diseases with at least minimal evidence of an association with periodontitis include COPD, pneumonia, chronic kidney disease, rheumatoid arthritis, cognitive impairment, obesity, metabolic syndrome and cancer. The working group agreed that there is insufficient evidence to date to infer causal relationships, with the exception that organisms originating in the oral microbiome can cause lung infections.
Conclusions: The group was unanimous in their opinion that the reported associations do not imply causality, and establishment of causality will require new studies that fulfil the Bradford Hill or equivalent criteria. Precise and community-agreed case definitions of periodontal disease states must be implemented systematically to enable consistent and clearer interpretations of studies of the relationship to systemic diseases. The members of the working group were unanimous in their opinion that, to develop data that best inform clinicians, investigators and the public, studies should focus on robust disease outcomes and avoid surrogate endpoints. It was concluded that, because of the relative immaturity of the body of evidence for each of the purported relationships, the field is wide open and the gaps in knowledge are large. © 2013 European Federation of Periodontology and American Academy of Periodontology.
Abstract:
The pharmacogenomics field is crucial for optimizing the selection of which chemotherapy regimen to use according to the patient's genomic profile. Indeed, the individual's inherited genome accounts for a large proportion of the variation in his or her response to chemotherapeutic agents, in terms of both efficacy and toxicity. Patients with metastatic disease are more likely to receive different lines of chemotherapy with variable efficacy and to experience related complications. It is therefore critical to tailor the best therapeutic arsenal to improve efficacy and to avoid, as far as possible, related complications that are liable to interrupt treatment. The pharmacogenomics approach investigates, for each drug, the implicated metabolic pathway and the potential personal variations in gene function. The aim of this review is to present a clear overview of the polymorphisms most reliably identified as related to drug response in patients with metastatic colorectal cancer (mCRC).
Abstract:
In this study, the authors propose simple methods to evaluate the achievable rates and outage probability of a cognitive radio (CR) link, taking into account imperfect spectrum sensing. In the considered system, the CR transmitter and receiver correlatively sense and dynamically exploit the spectrum pool via dynamic frequency hopping. Under imperfect spectrum sensing, false alarms and missed detections occur, which cause impulsive interference arising from collisions due to the simultaneous spectrum access of primary and cognitive users. This makes it very challenging to evaluate the achievable rates. By first examining the static link, where the channel is assumed to be constant over time, they show that the achievable rate using a Gaussian input can be calculated accurately through a simple series representation. In the second part of this study, they extend the calculation of the achievable rate to wireless fading environments. To take into account the effect of fading, they introduce a piecewise-linear curve-fitting method to approximate the instantaneous achievable rate curve as a combination of linear segments. It is then demonstrated that the ergodic achievable rate in fast fading and the outage probability in slow fading can be calculated to any given accuracy level.
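As a rough illustration of the piecewise-linear idea described above (not the authors' exact rate expression, which accounts for impulsive interference from sensing errors), the sketch below approximates a stand-in instantaneous rate curve by linear segments and averages it over Rayleigh fading to estimate an ergodic rate and an outage probability; the rate function, knot grid and parameters are all assumptions. A minimal sketch in Python:

import numpy as np

# Stand-in instantaneous rate curve R(snr); the paper's expression for a Gaussian
# input under impulsive interference is not reproduced here.
def rate(snr):
    return np.log2(1.0 + snr)

# Piecewise-linear approximation: store the curve at a few knot points and
# interpolate linearly between them (values beyond the last knot are clamped).
knots_snr = np.linspace(0.0, 30.0, 8)
knots_rate = rate(knots_snr)
def rate_pwl(snr):
    return np.interp(snr, knots_snr, knots_rate)

# Rayleigh fading: instantaneous SNR is exponentially distributed about its mean.
rng = np.random.default_rng(1)
mean_snr = 10.0                                  # illustrative average SNR (linear scale)
snr_samples = rng.exponential(mean_snr, 200_000)

ergodic_exact = rate(snr_samples).mean()         # Monte Carlo average of the exact curve
ergodic_pwl = rate_pwl(snr_samples).mean()       # same average using the linear segments

target_rate = 2.0                                # illustrative outage threshold (bit/s/Hz)
outage = np.mean(rate_pwl(snr_samples) < target_rate)

print(f"ergodic rate: exact {ergodic_exact:.3f}, piecewise-linear {ergodic_pwl:.3f} bit/s/Hz")
print(f"outage probability below {target_rate} bit/s/Hz: {outage:.3f}")

The appeal of the linear segments is that their averages over a fading distribution are easy to evaluate, so the approximation error, and hence the accuracy of the ergodic rate and outage probability, can be controlled by the number of segments.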
Abstract:
This study investigates topology optimization of energy-absorbing structures in which material damage is accounted for in the optimization process. The optimization objective is to design the lightest structures that are able to absorb the required mechanical energy. A structural continuity constraint check is introduced that is able to detect when no feasible load path remains in the finite element model, usually as a result of large-scale fracture. This ensures that designs do not fail when loaded under the conditions prescribed in the design requirements. The continuity constraint check is automated and requires no intervention from the analyst once the optimization process is initiated. Consequently, the optimization algorithm proceeds towards evolving an energy-absorbing structure with the minimum structural mass that is not susceptible to global structural failure. A method is also introduced to determine when the optimization process should halt. The method identifies when the optimization has plateaued and is no longer likely to provide improved designs if continued for further iterations. This provides the designer with a rational method to determine the time needed to run the optimization and avoid wasting computational resources on unnecessary iterations. A case study is presented to demonstrate the use of this method.
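The continuity-constraint idea described above, flagging a design as infeasible once fracture leaves no load path between the loaded region and the supports, can be pictured as a graph-connectivity check over the surviving elements. The mesh representation, helper name and tiny example below are assumptions for illustration, not the authors' implementation. A minimal sketch in Python:

from collections import deque

def has_load_path(elements, failed, loaded_nodes, support_nodes):
    """Return True if a chain of surviving elements connects a loaded node to a
    support node.  `elements` maps element id -> set of node ids; `failed` is
    the set of element ids removed by damage or fracture."""
    # Node-to-node adjacency through surviving elements only.
    adjacency = {}
    for eid, nodes in elements.items():
        if eid in failed:
            continue
        for n in nodes:
            adjacency.setdefault(n, set()).update(nodes - {n})

    # Breadth-first search from the loaded nodes towards the supports.
    frontier = deque(n for n in loaded_nodes if n in adjacency)
    seen = set(frontier)
    while frontier:
        node = frontier.popleft()
        if node in support_nodes:
            return True
        for nxt in adjacency.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Illustrative four-element strip: load at node 1, support at node 5.
elements = {1: {1, 2}, 2: {2, 3}, 3: {3, 4}, 4: {4, 5}}
print(has_load_path(elements, failed=set(), loaded_nodes={1}, support_nodes={5}))  # True
print(has_load_path(elements, failed={3}, loaded_nodes={1}, support_nodes={5}))    # False

A check of this kind can be run after each damage update, so an optimizer can discard candidates with no remaining load path instead of evaluating them further.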