
Title:
DEVICES AND METHODS FOR PROCESSING NETWORK NODES
Document Type and Number:
WIPO Patent Application WO/2015/021845
Kind Code:
A1
Abstract:
Devices and methods are provided for processing network nodes. For example, a sorting request is detected; one or more first network nodes to be sorted are acquired corresponding to the sorting request; a first characteristic matrix is constructed based on at least information associated with the first network nodes; association information between the first network nodes is acquired; a sparse matrix is constructed based on at least information associated with the association information between the first network nodes; iterative multiplication is performed on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence; and the first network nodes are sorted based on at least information associated with the second characteristic matrix.

Inventors:
FENG XIAOWEI (CN)
REN JIAOJIAO (CN)
XIONG YAN (CN)
Application Number:
PCT/CN2014/082363
Publication Date:
February 19, 2015
Filing Date:
July 17, 2014
Assignee:
TENCENT TECH SHENZHEN CO LTD (CN)
International Classes:
G06F15/173
Foreign References:
CN101657040A 2010-02-24
CN101140588A 2008-03-12
US8180804B1 2012-05-15
US20110252121A1 2011-10-13
Other References:
KANG, U. ET AL.: "Big Graph Mining: Algorithms and Discoveries", SIGKDD EXPLORATIONS, vol. 14, 31 December 2012 (2012-12-31), pages 29-30
Attorney, Agent or Firm:
BEIJING SAN GAO YONG XIN INTELLECTUAL PROPERTY AGENCY CO., LTD. (No.5 Huizhong Road Chaoyang District, Beijing 1, CN)
Claims:
What is claimed is:

1. A method for processing network nodes, the method comprising: detecting a sorting request;

acquiring one or more first network nodes to be sorted corresponding to the sorting request;

constructing a first characteristic matrix based on at least information associated with the first network nodes;

acquiring association information between the first network nodes;

constructing a sparse matrix based on at least information associated with the association information between the first network nodes;

performing iterative multiplication on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence; and sorting the first network nodes based on at least information associated with the second characteristic matrix.

2. The method of claim 1, wherein the constructing a sparse matrix based on at least information associated with the association information between the first network nodes includes:

constructing an original matrix based on at least information associated with the first network nodes, wherein first element values of the original matrix correspond to "0";

in response to two second network nodes being associated according to the association information between the first network nodes,

setting second element values in the original matrix corresponding to the second network nodes as "1"; and

updating the original matrix; and

performing probability transition processing on the updated original matrix to generate the sparse matrix.

3. The method of claim 2, wherein the performing probability transition processing on the updated original matrix to generate the sparse matrix includes: in response to the association information between the first network nodes including contact frequency information, performing the probability transition processing on the updated original matrix to generate the sparse matrix based on at least information associated with one or more weights corresponding to the contact frequency information.

4. The method of claim 2, further comprising:

storing one or more first column numbers associated with one or more first columns of the original matrix, where third element values of the first columns correspond to "0";

storing one or more first row numbers and one or more second column numbers associated with fourth element values of the original matrix corresponding to "1"; and

storing fifth element values of the sparse matrix obtained through the probability transition processing on sixth element values of the original matrix corresponding to "1".

5. The method of claim 4, wherein the performing iterative multiplication on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence includes:

partitioning the sparse matrix into three blocks of data based on at least information associated with the first row numbers, the second column numbers and the first column numbers;

multiplying the three blocks of data by the first characteristic matrix; and adding results of the multiplication to obtain a new characteristic matrix.

6. The method of any of claims 1-5, wherein the characteristic matrix includes an n×1 matrix, and the sparse matrix includes an n×n matrix, where n is a number of the first network nodes.

7. A device for processing network nodes, the device comprising:

a characteristic matrix constructing module configured to detect a sorting request, acquire one or more first network nodes to be sorted corresponding to the sorting request, and construct a first characteristic matrix based on at least information associated with the first network nodes; a sparse matrix constructing module configured to acquire association information between the first network nodes and construct a sparse matrix based on at least information associated with the association information between the first network nodes;

a computing module configured to perform iterative multiplication on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence; and

a sorting module configured to sort the first network nodes based on at least information associated with the second characteristic matrix.

8. The device of claim 7, wherein the sparse matrix constructing module includes:

an original matrix constructing unit configured to construct an original matrix based on at least information associated with the first network nodes, wherein first element values of the original matrix correspond to "0";

a matrix element value setting unit configured to, in response to two second network nodes being associated according to the association information between the first network nodes, set second element values in the original matrix corresponding to the second network nodes as "1" and update the original matrix; and

a matrix transformation unit configured to perform probability transition processing on the updated original matrix to generate the sparse matrix.

9. The device of claim 8, wherein the matrix transformation unit is further configured to, in response to the association information between the first network nodes including contact frequency information, perform the probability transition processing on the updated original matrix to generate the sparse matrix based on at least information associated with one or more weights corresponding to the contact frequency information.

10. The device of claim 8, wherein the device further includes: a storage module configured to:

store one or more first column numbers associated with one or more first columns of the original matrix, where third element values of the first columns correspond to "0"; store one or more first row numbers and one or more second column numbers associated with fourth element values of the original matrix corresponding to "1"; and store fifth element values of the sparse matrix obtained through the probability transition processing on sixth element values of the original matrix corresponding to "1".

11. The device of claim 10, wherein the computing module is further configured to:

partition the sparse matrix into three blocks of data based on at least information associated with the first row numbers, the second column numbers and the first column numbers;

multiply the three blocks of data by the first characteristic matrix; and add results of the multiplication to obtain a new characteristic matrix.

12. The device of any of claims 7-11, wherein the characteristic matrix includes an n×1 matrix, and the sparse matrix includes an n×n matrix, where n is a number of the first network nodes.

13. The device of claim 7, further comprising:

one or more data processors; and

a computer-readable storage medium;

wherein one or more of the characteristic matrix constructing module, the sparse matrix constructing module, the computing module, and the sorting module are stored in the storage medium and configured to be executed by the one or more data processors.

14. A non-transitory computer readable storage medium comprising programming instructions for processing network nodes, the programming instructions configured to cause one or more data processors to execute operations comprising:

detecting a sorting request;

acquiring one or more first network nodes to be sorted corresponding to the sorting request; constructing a first characteristic matrix based on at least information associated with the first network nodes;

acquiring association information between the first network nodes;

constructing a sparse matrix based on at least information associated with the association information between the first network nodes;

performing iterative multiplication on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence; and sorting the first network nodes based on at least information associated with the second characteristic matrix.

Description:
DEVICES AND METHODS FOR PROCESSING NETWORK NODES

CROSS-REFERENCES TO RELATED APPLICATIONS

[0001] The application claims priority to Chinese Patent Application No. 201310356173.2, filed August 15, 2013, incorporated by reference herein for all purposes.

BACKGROUND OF THE INVENTION

[0002] Certain embodiments of the present invention are directed to computer technology. More particularly, some embodiments of the invention provide systems and methods for network technology. Merely by way of example, some embodiments of the invention have been applied to network nodes. But it would be recognized that the invention has a much broader range of applicability.

[0003] Owing to Google, more and more research and applications have been devoted to PageRank algorithms. At present, PageRank algorithms are widely applied to influence rankings of nodes on a relation network. Existing algorithms are implemented using various solutions, such as C/C++ and Java, and some algorithms are packaged in commercial software. However, the conventional technology has some disadvantages. For example, some algorithms cannot implement parallel computing and can only be used for small and medium-sized datasets. Other algorithms are capable of performing parallel computing, but they are difficult to implement and inconvenient to maintain.

[0004] Hence it is highly desirable to improve the techniques for processing network nodes.

BRIEF SUMMARY OF THE INVENTION

[0005] According to one embodiment, a method is provided for processing network nodes. For example, a sorting request is detected; one or more first network nodes to be sorted are acquired corresponding to the sorting request; a first characteristic matrix is constructed based on at least information associated with the first network nodes; association information between the first network nodes is acquired; a sparse matrix is constructed based on at least information associated with the association information between the first network nodes; iterative multiplication is performed on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence; and the first network nodes are sorted based on at least information associated with the second characteristic matrix.

[0006] According to another embodiment, a device for processing network nodes includes: a characteristic matrix constructing module configured to detect a sorting request, acquire one or more first network nodes to be sorted corresponding to the sorting request, and construct a first characteristic matrix based on at least information associated with the first network nodes; a sparse matrix constructing module configured to acquire association information between the first network nodes and construct a sparse matrix based on at least information associated with the association information between the first network nodes; a computing module configured to perform iterative multiplication on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence; and a sorting module configured to sort the first network nodes based on at least information associated with the second characteristic matrix.

[0007] According to yet another embodiment, a non-transitory computer readable storage medium includes programming instructions for processing network nodes. The programming instructions are configured to cause one or more data processors to execute certain operations. For example, a sorting request is detected; one or more first network nodes to be sorted are acquired corresponding to the sorting request; a first characteristic matrix is constructed based on at least information associated with the first network nodes; association information between the first network nodes is acquired; a sparse matrix is constructed based on at least information associated with the association information between the first network nodes; iterative multiplication is performed on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence; and the first network nodes are sorted based on at least information associated with the second characteristic matrix.

[0008] For example, the devices and methods disclosed herein are realized using general-purpose structured query languages (SQL). As an example, the devices and methods disclosed herein are configured to sort network nodes of a small and medium-sized dataset using a relational database. For example, the devices and methods disclosed herein are configured to sort network nodes of a large or ultra-large dataset using Hive and Hadoop distributed computing platforms.

[0009] Depending upon embodiment, one or more benefits may be achieved. These benefits and various additional objects, features and advantages of the present invention can be fully appreciated with reference to the detailed description and accompanying drawings that follow.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] Figure 1 is a simplified diagram showing a network platform for processing network nodes according to one embodiment of the present invention.

[0011] Figure 2 is a simplified diagram showing a method for processing network nodes according to one embodiment of the present invention.

[0012] Figure 3 is a simplified diagram showing a process for constructing a sparse matrix as part of the method as shown in Figure 2 according to one embodiment of the present invention.

[0013] Figure 4 is a simplified diagram showing a process for performing probability transition processing to generate a sparse matrix as part of the process as shown in Figure 3 according to one embodiment of the present invention.

[0014] Figure 5(A) and Figure 5(B) are simplified diagrams showing an original matrix constructed based on multiple network nodes according to some embodiments of the present invention.

[0015] Figure 6 is a simplified diagram showing a sparse matrix according to one embodiment of the present invention.

[0016] Figure 7 is a simplified diagram showing a device for processing network nodes according to one embodiment of the present invention.

[0017] Figure 8 is a simplified diagram showing a sparse matrix constructing module as part of the device as shown in Figure 7 according to one embodiment of the present invention.

[0018] Figure 9 is a simplified diagram showing a device for processing network nodes according to another embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

[0019] Figure 1 is a simplified diagram showing a network platform for processing network nodes according to one embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.

[0020] According to some embodiments, network nodes are processed (e.g., sorted) using various network platforms, such as a micro-blog platform, a player transaction platform of a network game, etc. As shown in Figure 1, a relation network is established between users via the network platform, and network nodes in the same relation network can be sorted, according to certain embodiments. For example, network nodes of a small and medium-sized dataset can be sorted using a relational database. In another example, network nodes of a large or ultra-large dataset can be sorted using Hive and Hadoop distributed computing platforms. The network platform corresponds to a micro-blog platform, and relationships among users include a listening relation and a micro-blog forwarding relation, according to some embodiments.

[0021] Figure 2 is a simplified diagram showing a method for processing network nodes according to one embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The method 100 includes at least processes S110-S140.

[0022] According to one embodiment, the process S110 includes: detecting a sorting request, acquiring one or more first network nodes to be sorted corresponding to the sorting request, and constructing a first characteristic matrix based on at least information associated with the first network nodes. For example, the sorting request includes node types of the first network nodes to be sorted. In another example, the sorting request includes user nodes that access a certain network during a preset time period. As an example, when the sorting request is detected, the user nodes that access the network during the preset time period are counted as the first network nodes to be sorted. As another example, the preset time period corresponds to one or more days or one or more hours. As yet another example, the first network nodes to be sorted are acquired to construct a characteristic matrix B which includes an n×1 matrix:

    B = [b11, b21, ..., bn1]^T

where b11, b21, ..., bn1 correspond to "1/n" and n is a number of the first network nodes.
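As an illustration of this construction (a minimal sketch that is not part of the original disclosure), the following Python snippet builds the n×1 characteristic matrix B whose entries are all 1/n; the node identifiers are hypothetical.

import numpy as np

def build_characteristic_matrix(node_ids):
    # Construct the initial n x 1 characteristic matrix B with every entry equal to 1/n.
    n = len(node_ids)
    return np.full((n, 1), 1.0 / n)

# Hypothetical example: six user nodes acquired for a sorting request.
B = build_characteristic_matrix(["A", "B", "C", "D", "E", "F"])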

[0023] According to another embodiment, the process S120 includes: acquiring association information between the first network nodes and constructing a sparse matrix based on at least information associated with the association information between the first network nodes. For example, association information between the first network nodes is acquired from a background database of the network. As an example, the user nodes of the micro-blog platform include various relations, such as listening relations, listened relations, and relations associated with forwarding micro-blogs and a number of forwarded micro-blogs. Then the sparse matrix is constructed based on the association information between the first network nodes, according to certain embodiments. For example, the sparse matrix includes an n×n matrix, and element values of the sparse matrix include relational values of the first network nodes which contain one relation. As an example, the listening relations or the listened relations of the first network nodes are counted, or the relations associated with forwarding micro-blogs and a number of forwarded micro-blogs are counted.

[0024] According to yet another embodiment, the process S130 includes: performing iterative multiplication on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence. For example, the sparse matrix is multiplied by the characteristic matrix to obtain a new characteristic matrix. Then the sparse matrix is multiplied by the new characteristic matrix, and so on, until the characteristic matrix converges, to obtain the second characteristic matrix (e.g., the final characteristic matrix). As an example, characteristic matrix convergence is determined by calculating differences between element values in the new characteristic matrix and element values in a previous characteristic matrix, acquiring a sum of absolute values of the differences, and then determining whether the sum of the absolute values of the differences approaches zero. As another example, if the sum of the absolute values of the differences approaches zero, the characteristic matrix has converged. Otherwise, the characteristic matrix has not converged.
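A minimal sketch of this iteration and convergence test follows (not part of the original disclosure); the tolerance and the iteration cap are illustrative parameters that the text does not specify, and the sparse matrix is represented densely for simplicity.

import numpy as np

def iterate_until_convergence(A, B, tol=1e-8, max_iters=1000):
    # Repeatedly multiply the sparse matrix A by the characteristic matrix B,
    # stopping when the sum of absolute differences between successive
    # characteristic matrices approaches zero (falls below tol).
    for _ in range(max_iters):
        B_new = A @ B
        if np.abs(B_new - B).sum() < tol:
            return B_new
        B = B_new
    return B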

[0025] In one embodiment, the process S140 includes: sorting the first network nodes based on at least information associated with the second characteristic matrix. For example, after the characteristic matrix has converged, the first network nodes are sorted in a descending order based on the element values of the second characteristic matrix (e.g., the final characteristic matrix). As shown in Figure 1, the sparse matrix and the characteristic matrix are constructed based on the first network nodes to be sorted, and iterative multiplication computing is carried out on the sparse matrix and the characteristic matrix until the characteristic matrix converges, according to some embodiments. For example, the first network nodes are then sorted based on the element values of the converged characteristic matrix. The method 100 is realized using general-purpose structured query languages, in some embodiments. For example, the sorting of network nodes of a small and medium-sized dataset may be rapidly implemented using a relational database, while the sorting of network nodes of a large or ultra-large dataset may be easily implemented using Hive and Hadoop distributed computing platforms.
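As a brief illustration (again not part of the original text), the final sorting step can be sketched as a descending sort of the nodes by their converged characteristic values; the function and variable names are hypothetical, and B_final is assumed to be the converged n×1 matrix produced by the iteration above.

def sort_nodes(node_ids, B_final):
    # Sort the first network nodes in descending order of their final characteristic values.
    scores = B_final.ravel()
    return sorted(zip(node_ids, scores), key=lambda pair: pair[1], reverse=True)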

[0026] Figure 3 is a simplified diagram showing a process for constructing a sparse matrix as part of the method as shown in Figure 2 according to one embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The process S120 includes at least processes S121-S123.

[0027] According to one embodiment, the process S121 includes: constructing an original matrix based on at least information associated with the first network nodes, wherein first element values of the original matrix correspond to "0". For example, based on all network nodes, the original matrix is constructed as below:

    G = | g11  g12  ...  g1n |
        | g21  g22  ...  g2n |
        | ...  ...  ...  ... |        (1)
        | gn1  gn2  ...  gnn |

where all element values in the original matrix are "0".

[0028] According to another embodiment, the process S122 includes: in response to two second network nodes being associated according to the association information between the first network nodes, setting second element values in the original matrix corresponding to the second network nodes as "1", and updating the original matrix. For example, whether the two second network nodes are associated is determined based on the association information between the first network nodes. As an example, if "listening" and "listened" relations are associated, the element values, corresponding to the second network nodes, in the original matrix are set as "1". As another example, if the "listening" and "listened" relations are not associated, the process S122 is not carried out. In one embodiment, there are 10 network nodes. If a first network node is in a mutual listening relation with a second network node, a fifth network node and an eighth network node respectively, g12, g21, g15, g51, g18 and g81 are all set as "1". If the first network node listens to a third network node and a sixth network node, g31 and g61 are set as "1".
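To make the ten-node example in paragraph [0028] concrete, here is a small Python sketch (not from the original disclosure) that initializes the original matrix to zeros and sets the listed entries to "1"; the index pairs simply restate the example above in 0-based form.

import numpy as np

def build_original_matrix(n, associated_pairs):
    # Process S121: construct the original matrix G with all element values "0".
    # Process S122: set g[i][j] to "1" for each associated pair (i, j).
    G = np.zeros((n, n))
    for i, j in associated_pairs:
        G[i, j] = 1
    return G

# Ten-node example from the text, converted to 0-based indices:
# mutual listening between node 1 and nodes 2, 5, 8 sets g12, g21, g15, g51, g18, g81;
# node 1 listening to nodes 3 and 6 sets g31 and g61.
pairs = [(0, 1), (1, 0), (0, 4), (4, 0), (0, 7), (7, 0), (2, 0), (5, 0)]
G = build_original_matrix(10, pairs)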

[0029] According to yet another embodiment, the process S123 includes: performing probability transition processing on the updated original matrix to generate the sparse matrix. For example, the probability transition processing is performed on the updated original matrix to generate the sparse matrix A = (aij) as follows:

    aij = p * gij / Cj + (1 - p) / n        (2)

where p represents a constant (a damping factor), and Cj represents a sum of the elements in the j-th column of the matrix G.

[0030] According to certain embodiments, when gij = 1, aij = p / Cj + (1 - p) / n; when gij = 0 and Cj ≠ 0, aij = (1 - p) / n; and when Cj = 0, aij = 1 / n.
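The probability transition can be sketched in Python as below (not part of the original disclosure); the damping value p = 0.85 is a conventional choice and only an assumption, since the text calls p a constant without fixing its value.

import numpy as np

def probability_transition(G, p=0.85):
    # Turn the 0/1 original matrix G into the sparse matrix A of Equation 2:
    #   a_ij = p * g_ij / C_j + (1 - p) / n   when the column sum C_j is nonzero,
    #   a_ij = 1 / n                          when C_j = 0.
    n = G.shape[0]
    C = G.sum(axis=0)                       # C_j: sum of the j-th column of G
    A = np.full((n, n), (1.0 - p) / n)      # uniform (1 - p) / n term for every entry
    nonzero = C != 0
    A[:, nonzero] += p * G[:, nonzero] / C[nonzero]
    A[:, ~nonzero] = 1.0 / n                # columns whose sum C_j is 0
    return A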

[0031] According to some embodiments, Ai corresponds to a set of column numbers of columns that include non-zero values associated with an i-th row in the matrix G, and D corresponds to a set of column numbers of columns that have Cj of 0 in the matrix G. For example, if iterative multiplication is performed on the sparse matrix A and the characteristic matrix B, the following equation is obtained for each element bi' of the new characteristic matrix:

    bi' = p * Σ(j in Ai) bj / Cj + (p / n) * Σ(j in D) bj + ((1 - p) / n) * Σ(j = 1..n) bj        (3)

[0032] According to certain embodiments, when iterative multiplication is performed on the matrix A and the matrix B, Equation 3 is partitioned into three blocks, such as p * Σ(j in Ai) bj / Cj, (p / n) * Σ(j in D) bj, and ((1 - p) / n) * Σ(j = 1..n) bj, which are calculated respectively and are then added. For example, iterative computing is performed by storing the column numbers of the columns having Cj of 0, the row numbers and the column numbers of the original matrix G having element values of 1, and the element values in the matrix A after the probability transition that correspond to element values of 1 in the original matrix G. Through this setting of the sparse matrix, data can be calculated in blocks when iterative multiplication is performed on the sparse matrix A and the characteristic matrix B, and thus the calculation efficiency is improved, according to certain embodiments. For example, during iterative multiplication, not all data is required to be stored, and thus the storage space is saved.
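One possible reading of this blocked computation is sketched below (not part of the original disclosure): the stored data are assumed to be the row numbers, column numbers and transition values of the entries of A that correspond to "1" entries of G, plus the column numbers with Cj = 0; the damping value p = 0.85 is again an assumption.

import numpy as np

def blocked_iteration_step(n, rows, cols, vals, zero_cols, B, p=0.85):
    # One blocked multiplication step of Equation 3.
    #   rows, cols : row and column numbers of entries of G equal to "1"
    #   vals       : the corresponding entries a_ij of A after probability transition
    #   zero_cols  : column numbers j whose column sum C_j is 0
    b = B.ravel()
    # Block 1: p * sum over j in A_i of b_j / C_j, recovered from the stored a_ij.
    block1 = np.zeros(n)
    np.add.at(block1, rows, (vals - (1.0 - p) / n) * b[cols])
    # Block 2: (p / n) * sum of b_j over the columns with C_j = 0.
    block2 = (p / n) * b[zero_cols].sum()
    # Block 3: ((1 - p) / n) * sum of all b_j.
    block3 = ((1.0 - p) / n) * b.sum()
    return (block1 + block2 + block3).reshape(-1, 1)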

[0033] Figure 4 is a simplified diagram showing a process for performing probability transition processing to generate a sparse matrix as part of the process as shown in Figure 3 according to one embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The process S123 includes at least processes S1231-S1233.

[0034] According to one embodiment, during the process S1231, whether contact frequency information is available in the association information between the first network nodes is determined. For example, if the contact frequency information is available in the association information between the first network nodes, the process S1232 is executed. Otherwise, the process S1233 is executed. As an example, the association information of the first network nodes includes relation information which has one state, such as the listening relations and the listened relations. As another example, the association information of the first network nodes includes relation information which has multiple states, such as the number of times micro-blogs are forwarded in a micro-blog forwarding relation. Therefore, when the probability transition processing is performed on the sparse matrix, the contact frequency information is simultaneously considered, according to certain embodiments.

[0035] According to another embodiment, the process S1232 includes: performing the probability transition processing on the updated original matrix to generate the sparse matrix based on at least information associated with one or more weights corresponding to the contact frequency information. For example, if the contact frequency information is available in the association information between the network nodes, the corresponding weight is acquired based on the contact frequency. As an example, the more frequently the network node A forwards micro-blogs of the network node B, the larger the weight of the corresponding element that has a row number of "network node A" and a column number of "network node B". Then the probability transition processing is performed on the updated original matrix based on the acquired weights to generate the sparse matrix, according to certain embodiments.
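The text does not fix exactly how the weights enter the transition; the Python sketch below (not part of the original disclosure) assumes each nonzero entry of the updated original matrix carries its contact-frequency weight and each nonzero column is normalized by its weighted sum.

import numpy as np

def weighted_probability_transition(W, p=0.85):
    # W: updated original matrix whose nonzero entry w_ij holds the contact-frequency
    # weight (e.g., the number of forwarded micro-blogs) instead of a bare "1".
    n = W.shape[0]
    C = W.sum(axis=0)                       # weighted column sums
    A = np.full((n, n), (1.0 - p) / n)
    nonzero = C != 0
    A[:, nonzero] += p * W[:, nonzero] / C[nonzero]
    A[:, ~nonzero] = 1.0 / n                # columns with zero total weight
    return A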

[0036] According to yet another embodiment, during the process S1233, the probability transition processing is directly carried out on the updated original matrix to generate the sparse matrix.

[0037] Figure 5(A) and Figure 5(B) are simplified diagrams showing an original matrix constructed based on multiple network nodes according to some embodiments of the present invention. The diagrams are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.

[0038] As shown in Figure 5(A), there are six network nodes A to F, in some embodiments. For example, an arrow represents forwarding a micro-blog. As an example, the arrow from B to A represents that the network node B forwards the micro-blog of the network node A. The value on the arrow represents the number of forwarded micro-blogs. The original matrix G is constructed based on the association information of the network nodes, as shown in Figure 5(B), according to some embodiments. For example, the corresponding weights are acquired based on the number of forwarded micro-blogs between the network nodes. Then the probability transition processing is performed on the original matrix G based on the weights to generate the sparse matrix A, according to certain embodiments.

[0039] Figure 6 is a simplified diagram showing a sparse matrix according to one embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.

[0040] As shown in Figure 6, the network node B forwards more micro-blogs of the network node A, according to some embodiments. For example, the corresponding element value is 1/3 in the obtained sparse matrix A. The weight corresponding to the contact frequency information is also considered when the sparse matrix A is obtained and the contact frequency information is included in the association information between the network nodes, according to certain embodiments.

[0041] Figure 7 is a simplified diagram showing a device for processing network nodes according to one embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The device 700 includes: a characteristic matrix constructing module 110, a sparse matrix constructing module 120, a computing module 130, and a sorting module 140.

[0042] According to one embodiment, the characteristic matrix constructing module 110 is configured to detect a sorting request, acquire one or more first network nodes to be sorted corresponding to the sorting request, and construct a first characteristic matrix based on at least information associated with the first network nodes. For example, the sparse matrix constructing module 120 is configured to acquire association information between the first network nodes and construct a sparse matrix based on at least information associated with the association information between the first network nodes. As an example, the computing module 130 is configured to perform iterative multiplication on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence. In another example, the sorting module 140 is configured to sort the first network nodes based on at least information associated with the second characteristic matrix.

[0043] According to another embodiment, the sorting request includes node types of the first network nodes to be sorted. In another example, the sorting request includes user nodes that access a certain network during a preset time period. As an example, when the sorting request is detected, the user nodes that access the network during the preset time period are counted as the first network nodes to be sorted. As another example, the preset time period corresponds to one or more days or one or more hours. As yet another example, the characteristic matrix constructing module 110 acquires the first network nodes to be sorted to construct a characteristic matrix which includes an n×1 matrix, where element values of the characteristic matrix correspond to "1/n" and n is a number of the first network nodes.

[0044] According to yet another embodiment, the sparse matrix constructing module 120 acquires association information between the first network nodes from a background database of the network. As an example, the user nodes of the micro-blog platform include various relations, such as listening relations, listened relations, and relations associated with forwarding micro-blogs and a number of forwarded micro-blogs. Then, the sparse matrix constructing module 120 constructs the sparse matrix based on the association information between the first network nodes, according to certain embodiments. For example, the sparse matrix includes an n×n matrix, and element values of the sparse matrix include relational values of the first network nodes which contain one relation.

[0045] In one embodiment, the computing module 130 multiplies the sparse matrix by the characteristic matrix to obtain a new characteristic matrix. Then the computing module 130 multiplies the sparse matrix by the new characteristic matrix until the characteristic matrix is converged to obtain the second characteristic matrix (e.g., the final characteristic matrix). As an example, characteristic matrix convergence is calculated by determining differences between element values in the new characteristic matrix and element values in a previous characteristic matrix, acquiring a sum of absolute values of the differences and then determining whether the sum of the absolute values of the differences approaches zero. As another example, if the sum of the absolute values of the differences approaches zero, the characteristic matrix has converged. Otherwise, the characteristic matrix has not converged. In another embodiment, the sorting module 140 sorts the first network nodes in a descending order based on the element values of the second characteristic matrix.

[0046] In certain embodiments, the sparse matrix and the characteristic matrix are constructed based on the first network nodes to be sorted, and iterative multiplication computing is carried out on the sparse matrix and the characteristic matrix until the characteristic matrix converges. For example, the first network nodes are then sorted based on the element values of the converged characteristic matrix. The device 700 is realized using general-purpose structured query languages, in some embodiments. For example, the sorting of network nodes of a small and medium-sized dataset may be rapidly implemented using a relational database, while the sorting of network nodes of a large or ultra-large dataset may be easily implemented using Hive and Hadoop distributed computing platforms.

[0047] Figure 8 is a simplified diagram showing a sparse matrix constructing module as part of the device as shown in Figure 7 according to one embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.

[0048] According to one embodiment, the sparse matrix constructing module 120 includes: an original matrix constructing unit 121 configured to construct an original matrix based on at least information associated with the first network nodes, wherein first element values of the original matrix correspond to "0"; a matrix element value setting unit 122 configured to, in response to two second network nodes being associated according to the association information between the first network nodes, set second element values in the original matrix corresponding to the second network nodes as "1" and update the original matrix; and a matrix transformation unit 123 configured to perform probability transition processing on the updated original matrix to generate the sparse matrix.

[0049] According to another embodiment, the characteristic matrix B constructed by the characteristic matrix constructing module 110 and the sparse matrix A constructed by the sparse matrix constructing module 120 are as follows, respectively:

    B = [1/n, 1/n, ..., 1/n]^T,    A = (aij)n×n with aij = p * gij / Cj + (1 - p) / n

[0050] According to some embodiments, Ai corresponds to a set of column numbers of columns that include non-zero values associated with an i-th row in the matrix G, and D corresponds to a set of column numbers of columns that have Cj of 0 in the matrix G. For example, if iterative multiplication is performed on the sparse matrix A and the characteristic matrix B, the following equation is obtained:

    bi' = p * Σ(j in Ai) bj / Cj + (p / n) * Σ(j in D) bj + ((1 - p) / n) * Σ(j = 1..n) bj        (4)

[0051] According to certain embodiments, when iterative multiplication is performed on the matrix A and the matrix B, Equation 4 is partitioned into three blocks, such as p * Σ(j in Ai) bj / Cj, (p / n) * Σ(j in D) bj, and ((1 - p) / n) * Σ(j = 1..n) bj, which are calculated respectively and are then added. Through this setting of the sparse matrix, data can be calculated in blocks when iterative multiplication is performed on the sparse matrix A and the characteristic matrix B, and thus the calculation efficiency is improved, according to certain embodiments.

[0052] In one embodiment, the matrix transformation unit 123 is further configured to, in response to the association information between the first network nodes including contact frequency information, perform the probability transition processing on the updated original matrix G to generate the n×n sparse matrix based on at least information associated with one or more weights corresponding to the contact frequency information.

[0053] Figure 9 is a simplified diagram showing a device for processing network nodes according to another embodiment of the present invention. The diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.

[0054] According to one embodiment, the device 700 further includes: a storage module 150 configured to: store one or more first column numbers associated with one or more first columns of the original matrix, where third element values of the first columns correspond to "0"; store one or more first row numbers and one or more second column numbers associated with fourth element values of the original matrix corresponding to "1"; and store fifth element values of the sparse matrix obtained through the probability transition processing on sixth element values of the original matrix corresponding to "1".

[0055] According to certain embodiments, based on Equation 2 for generating the sparse matrix using the original matrix: when gij = 1, aij = p / Cj + (1 - p) / n; when gij = 0 and Cj ≠ 0, aij = (1 - p) / n; and when Cj = 0, aij = 1 / n.

[0056] According to some embodiments, when iterative multiplication is performed, the computing module 130 is further configured to partition the sparse matrix into three blocks of data based on at least information associated with the first row numbers, the second column numbers and the first column numbers; multiply the three blocks of data by the first characteristic matrix; and add results of the multiplication to obtain a new characteristic matrix. For example, iterative computing is performed by storing the column numbers of the columns having C j of 0, the row numbers and the column numbers of the original matrix G having element values of 1 and element values in the matrix A after probability transition that correspond to element values of 1 in the original matrix G. Not all data is required to be stored, and thus the storage space is saved, according to certain embodiments.

[0057] According to one embodiment, a method is provided for processing network nodes. For example, a sorting request is detected; one or more first network nodes to be sorted are acquired corresponding to the sorting request; a first characteristic matrix is constructed based on at least information associated with the first network nodes; association information between the first network nodes is acquired; a sparse matrix is constructed based on at least information associated with the association information between the first network nodes; iterative multiplication is performed on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence; and the first network nodes are sorted based on at least information associated with the second characteristic matrix. For example, the method is implemented according to at least Figure 2.

[0058] According to another embodiment, a device for processing network nodes includes: a characteristic matrix constructing module configured to detect a sorting request, acquire one or more first network nodes to be sorted corresponding to the sorting request, and construct a first characteristic matrix based on at least information associated with the first network nodes; a sparse matrix constructing module configured to acquire association information between the first network nodes and construct a sparse matrix based on at least information associated with the association information between the first network nodes; a computing module configured to perform iterative multiplication on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence; and a sorting module configured to sort the first network nodes based on at least information associated with the second characteristic matrix. For example, the device is implemented according to at least Figure 7 and/or Figure 9.

[0059] According to yet another embodiment, a non-transitory computer readable storage medium includes programming instructions for processing network nodes. The programming instructions are configured to cause one or more data processors to execute certain operations. For example, a sorting request is detected; one or more first network nodes to be sorted are acquired corresponding to the sorting request; a first characteristic matrix is constructed based on at least information associated with the first network nodes; association information between the first network nodes is acquired; a sparse matrix is constructed based on at least information associated with the association information between the first network nodes; iterative multiplication is performed on the sparse matrix and the first characteristic matrix to obtain a second characteristic matrix upon convergence; and the first network nodes are sorted based on at least information associated with the second characteristic matrix. For example, the storage medium is implemented according to at least Figure 2.

[0060] The above describes only several embodiments presented by this invention, and the description is relatively specific and detailed, yet it should not therefore be understood as limiting the scope of this invention. It should be noted that those of ordinary skill in the art may, without departing from the concept of the invention, make a number of variations and modifications, which all fall within the scope of this invention. Accordingly, the scope of protection is defined by the appended patent claims.

[0061] For example, some or all components of various embodiments of the present invention each are, individually and/or in combination with at least another component, implemented using one or more software components, one or more hardware components, and/or one or more combinations of software and hardware components. In another example, some or all components of various embodiments of the present invention each are, individually and/or in combination with at least another component, implemented in one or more circuits, such as one or more analog circuits and/or one or more digital circuits. In yet another example, various embodiments and/or examples of the present invention can be combined.

[0062] Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform the methods and operations described herein. Other implementations may also be used, however, such as firmware or even appropriately designed hardware configured to perform the methods and systems described herein.

[0063] The systems' and methods' data (e.g., associations, mappings, data input, data output, intermediate data results, final data results, etc.) may be stored and implemented in one or more different types of computer-implemented data stores, such as different types of storage devices and programming constructs (e.g., RAM, ROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, etc.). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.

[0064] The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions (e.g., software) for use in execution by a processor to perform the methods' operations and implement the systems described herein. The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes but is not limited to a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.

[0065] The computing system can include client devices and servers. A client device and server are generally remote from each other and typically interact through a communication network. The relationship of client device and server arises by virtue of computer programs running on the respective computers and having a client device-server relationship to each other.

[0066] This specification contains many specifics for particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment.

Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations, one or more features from a combination can in some cases be removed from the combination, and a combination may, for example, be directed to a subcombination or variation of a subcombination.

[0067] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

[0068] Although specific embodiments of the present invention have been described, it is understood by those of skill in the art that there are other embodiments that are equivalent to the described embodiments. Accordingly, it is to be understood that the invention is not to be limited by the specific illustrated embodiments, but only by the scope of the appended claims.