hermitian.py source code

python

Project: factorix    Author: gbouchar
import tensorflow as tf


def sparse_relational_hermitian_scoring(emb, tuples):
    """
    TensorFlow operator that scores triples where relations are defined by a complex vector w

    It is the same as the multilinear function, but uses complex embeddings instead. The complex embeddings are of
    size 2 * R where R is the complex dimension. They are encoded such that the first R columns correspond to the
    real part, and the last R columns correspond to the imaginary part. The result of this function is a length-T
    vector with values:

        S[i] = sum_j( Re(E[I[i,1],j]) * (Re(E[I[i,0],j]) * Re(E[I[i,2],j]) + Im(E[I[i,0],j]) * Im(E[I[i,2],j]))
                    + Im(E[I[i,1],j]) * (Re(E[I[i,0],j]) * Im(E[I[i,2],j]) - Im(E[I[i,0],j]) * Re(E[I[i,2],j])) )

    Where:
        - I is the tuples tensor of integers with shape (T, 3), each row holding (subject, relation, object) indices
        - E is the N * 2R tensor of complex embeddings (first R columns: real part, last R columns: imaginary part)

    :param emb: a real tensor of size [N, 2*R] containing the N rank-R complex embeddings by row (real part in the
        first R columns, imaginary part in the last R columns)
    :param tuples: tuple matrix of size [T, 3] containing T triples of integers (subject, relation, object indices)
        referring to rows of emb
    :return: length-T vector of relational Hermitian scores, i.e. the real part of the complex trilinear product
        <e_rel, e_subject, conj(e_object)> for each input triple
    >>> embeddings = tf.Variable([[1., 1, 0, 3], [0, 1, 0, 1], [-1, 1, 1, 5], [-3, 1, 0, 2], [-1, 2, -1, -5]])
    >>> idx = tf.Variable([[0, 3, 1], [1, 3, 0], [0, 3, 2], [2, 4, 0], [1, 4, 2], [2, 4, 1]])
    >>> g = sparse_relational_hermitian_scoring(embeddings, idx)
    >>> print(tf_eval(g))
    [  0.   8.  23.  44.  -8.  32.]
    """
    rk = emb.get_shape()[1].value // 2               # complex rank R (embeddings have 2*R real columns)
    emb_re = emb[:, :rk]                             # real parts of all embeddings
    emb_im = emb[:, rk:]                             # imaginary parts of all embeddings
    emb_sel_a_re = tf.gather(emb_re, tuples[:, 0])   # subject embeddings, real part
    emb_sel_a_im = tf.gather(emb_im, tuples[:, 0])   # subject embeddings, imaginary part
    emb_sel_b_re = tf.gather(emb_re, tuples[:, 2])   # object embeddings, real part
    emb_sel_b_im = tf.gather(emb_im, tuples[:, 2])   # object embeddings, imaginary part
    emb_rel_re = tf.gather(emb_re, tuples[:, 1])     # relation embeddings, real part
    emb_rel_im = tf.gather(emb_im, tuples[:, 1])     # relation embeddings, imaginary part

    # Hermitian product conj(e_subject) * e_object, split into real and imaginary parts
    pred_re = tf.multiply(emb_sel_a_re, emb_sel_b_re) + tf.multiply(emb_sel_a_im, emb_sel_b_im)
    pred_im = tf.multiply(emb_sel_a_re, emb_sel_b_im) - tf.multiply(emb_sel_a_im, emb_sel_b_re)

    # real part of the Hermitian dot product between the relation embedding and pred (per dimension)
    tmp = emb_rel_re * pred_re + emb_rel_im * pred_im

    return tf.reduce_sum(tmp, 1)                     # one scalar score per input triple
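
For a quick sanity check outside of a TensorFlow session, the same scores can be reproduced with plain NumPy by packing the two real halves into complex vectors and evaluating the equivalent trilinear form Re(sum_j w_r[j] * e_s[j] * conj(e_o[j])). The sketch below is not part of the factorix repository; hermitian_scoring_np is a hypothetical helper name that simply mirrors the computation above.

import numpy as np

def hermitian_scoring_np(emb, tuples):
    # pack the [N, 2*R] real matrix into an [N, R] complex matrix
    emb = np.asarray(emb, dtype=np.float64)
    rk = emb.shape[1] // 2
    cplx = emb[:, :rk] + 1j * emb[:, rk:]
    e_s = cplx[tuples[:, 0]]   # subject embeddings
    w_r = cplx[tuples[:, 1]]   # relation embeddings
    e_o = cplx[tuples[:, 2]]   # object embeddings
    # real part of the complex trilinear product, one score per triple
    return np.real(np.sum(w_r * e_s * np.conj(e_o), axis=1))

embeddings = [[1., 1, 0, 3], [0, 1, 0, 1], [-1, 1, 1, 5], [-3, 1, 0, 2], [-1, 2, -1, -5]]
idx = np.array([[0, 3, 1], [1, 3, 0], [0, 3, 2], [2, 4, 0], [1, 4, 2], [2, 4, 1]])
print(hermitian_scoring_np(embeddings, idx))   # expected: [  0.   8.  23.  44.  -8.  32.]

Storing the real and imaginary halves side by side in a single real tensor, as the TensorFlow version does, keeps the embeddings compatible with standard real-valued optimizers; the complex packing here is only a convenience for checking values.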