174 results for Adjacency Matrix


Relevance: 20.00%

Abstract:

Vaginal microbicides for the prevention of HIV transmission may be an important option for protecting women from infection. Incorporation of dapivirine, a lead candidate nonnucleoside reverse transcriptase inhibitor, into intravaginal rings (IVRs) for sustained mucosal delivery may increase microbicide product adherence and efficacy compared with conventional vaginal formulations. Twenty-four healthy HIV-negative women 18–35 years of age were randomly assigned (1:1:1) to dapivirine matrix IVR, dapivirine reservoir IVR, or placebo IVR. Dapivirine concentrations were measured in plasma and vaginal fluid samples collected at sequential time points over the 33-day study period (28 days of IVR use, 5 days of follow-up). Safety was assessed by pelvic/colposcopic examinations, clinical laboratory tests, and adverse events. Both IVR types were safe and well tolerated, with similar adverse events observed in the placebo and dapivirine groups. Dapivirine from both IVR types was successfully distributed throughout the lower genital tract at concentrations over 4 logs greater than the EC50 against wild-type HIV-1 (LAI) in MT4 cells. Maximum concentration (Cmax) and area under the concentration–time curve (AUC) values were significantly higher with the matrix than the reservoir IVR. Mean plasma concentrations of dapivirine were <2 ng/mL. These findings suggest that IVR delivery of microbicides is a viable option meriting further study.
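The pharmacokinetic endpoints mentioned above, Cmax and AUC, are computed from the sampled concentration–time curve. A minimal sketch of that calculation, using entirely hypothetical data (not values from this study) and the standard linear trapezoidal rule:

```python
import numpy as np

# Hypothetical concentration-time samples (illustration only, NOT study data):
# sampling times in days and dapivirine concentrations in ng/mL.
t = np.array([0.0, 1.0, 3.0, 7.0, 14.0, 21.0, 28.0, 33.0])
c = np.array([0.0, 0.8, 1.4, 1.6, 1.5, 1.4, 1.3, 0.2])

cmax = c.max()         # maximum observed concentration
tmax = t[c.argmax()]   # time at which Cmax occurred
auc = np.trapz(c, t)   # AUC over the sampling period, linear trapezoidal rule

print(f"Cmax = {cmax} ng/mL at day {tmax}, AUC = {auc} ng*day/mL")
```

In practice, PK analyses often use the log-linear trapezoidal rule on the declining phase and extrapolate AUC to infinity; the simple trapezoid above is the most common starting point.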

Relevance: 20.00%

Abstract:

This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters, i.e., the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is both to speed up the convergence of second-order learning algorithms such as Levenberg-Marquardt (LM) and to improve the network performance. This is achieved here by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods, which optimize these two sets of parameters simultaneously, the linear output weights are first converted into dependent parameters, thereby removing the need for their explicit computation. Consequently, the neural network (NN) learning is performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with the popular second-order learning methods in order to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is shown through an analysis of the computational complexity and by presenting simulation results from four different examples.
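The idea of converting the linear output weights into dependent parameters can be sketched as follows: for any fixed set of hidden-node parameters, the optimal output weights are simply the least-squares solution, so the second-order optimizer only ever sees the nonlinear parameters. This is a minimal illustration of that reduced-space formulation, not the paper's method; the data, network size, and the finite-difference Jacobian used by SciPy's LM solver (rather than the paper's proposed analytic Jacobian) are all assumptions for the example.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy regression problem (assumed for illustration).
X = rng.uniform(-1.0, 1.0, size=(100, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def hidden_output(X, W, b):
    """Sigmoid activations of the SLFN hidden layer."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def reduced_residual(params, X, y, n_hidden):
    """Residual over the reduced solution space: the linear output
    weights are dependent parameters, recovered by least squares for
    the current hidden-node parameters, so only the nonlinear
    parameters (W, b) are free variables of the optimizer."""
    d = X.shape[1]
    W = params[: d * n_hidden].reshape(d, n_hidden)
    b = params[d * n_hidden :]
    H = hidden_output(X, W, b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # dependent output weights
    return y - H @ beta

n_hidden = 5
p0 = rng.normal(size=X.shape[1] * n_hidden + n_hidden)

# Levenberg-Marquardt over the reduced (nonlinear-only) parameter space.
res = least_squares(reduced_residual, p0, args=(X, y, n_hidden), method="lm")
print("initial cost:", 0.5 * np.sum(reduced_residual(p0, X, y, n_hidden) ** 2))
print("final cost:  ", res.cost)
```

The key design point is that the optimizer's search space has only `d * n_hidden + n_hidden` dimensions instead of also carrying the output weights, which is what allows a smaller (and, in the paper, more accurate) Jacobian.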