My problem is: I want to perform large queries with the API 2.0 using a Python function that extracts data from one dimension (Posts, about 60,000 rows or more) and then breaks it down by another dimension (Tags), querying each row of the first dimension (Posts).
Why this is happening: I'm using the Adobe Analytics API 2.0 to perform some extractions, including one in particular that is quite "large" for the API. Right now I have a dimension of Posts (dim1) that I need to break down by the dimension Tags (dim2). I'm facing two problems here. The first is that the Posts dimension (dim1) has over 60,000 rows per day, so I need to take each of those 60,000 rows, with its identifier (item_id), and perform another query against it.
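For context, this is roughly what one of those per-row sub-queries looks like in my code. This is only a sketch of the JSON body sent to the 2.0 `/reports` endpoint; the report suite ID, dimension IDs (`variables/evar1`, `variables/evar2`), and metric ID are placeholders, not my real configuration:

```python
# Sketch: building an Analytics 2.0 /reports payload that breaks one
# Posts row (identified by its item_id) down by the Tags dimension.
# The rsid, dimension IDs, and metric ID below are placeholders.

def build_breakdown_payload(rsid, item_id, date_range,
                            posts_dim="variables/evar1",   # hypothetical Posts dimension
                            tags_dim="variables/evar2",    # hypothetical Tags dimension
                            metric="metrics/pageviews"):
    """Return the JSON body for one Tags-by-Post breakdown query."""
    return {
        "rsid": rsid,
        "globalFilters": [
            {"type": "dateRange", "dateRange": date_range}
        ],
        "metricContainer": {
            "metrics": [
                {"columnId": "0", "id": metric, "filters": ["bd0"]}
            ],
            # The metricFilter ties the metric to one row (item_id)
            # of the parent Posts dimension.
            "metricFilters": [
                {"id": "bd0", "type": "breakdown",
                 "dimension": posts_dim, "itemId": item_id}
            ],
        },
        "dimension": tags_dim,
        "settings": {"limit": 50000, "page": 0},
    }

payload = build_breakdown_payload(
    "my-rsid", "1234567890",
    "2021-01-01T00:00:00.000/2021-01-02T00:00:00.000")
```

So with 60,000 Posts rows per day, that is 60,000 of these request bodies, one per item_id.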
Reading the documentation carefully, I saw that the maximum number of rows per query is 50,000 for both version 1.4 and version 2.0.
Is there any way to extend this limit and perform these large queries? Can I somehow get past the 50,000-row limit?
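The only workaround I can think of is paging: the 2.0 `/reports` endpoint accepts `settings.limit` and `settings.page`, so rows beyond the cap can be fetched in successive pages rather than in one call. Here is a sketch of that loop; the `fetch` callable stands in for the actual HTTP call (e.g. a `requests.post` against `/reports`), and the stub response shape is only an assumption for illustration:

```python
# Sketch: paging past the per-request row cap by walking settings.page.
# `fetch` is injected so the loop itself can be run without the network.

def fetch_all_rows(fetch, payload, page_size=50000):
    """Collect every row of a report by requesting page 0, 1, 2, ..."""
    rows, page = [], 0
    while True:
        payload["settings"] = {"limit": page_size, "page": page}
        response = fetch(payload)           # one /reports call per page
        rows.extend(response["rows"])
        if response.get("lastPage", True):  # the API flags the final page
            break
        page += 1
    return rows

# Stub standing in for the real API: three pages of two fake rows each.
def fake_fetch(payload):
    page = payload["settings"]["page"]
    return {"rows": [f"row-{page}-{i}" for i in range(2)],
            "lastPage": page == 2}

all_rows = fetch_all_rows(fake_fetch, {"rsid": "my-rsid"}, page_size=2)
```

But paging does not help with the sheer number of sub-queries, which brings me to the second problem.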
On the other hand, I've seen that these breakdown queries are quite slow in version 2.0 of the API, which I'm working with; in some cases it takes about two hours just to collect 1,000 sub-query dimension rows. Can anyone tell me whether version 1.4 of the API performs better for this kind of operation?
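Right now I issue the sub-queries one at a time. One thing I've considered to reduce the wall-clock time is running several of them in flight at once, along these lines (a sketch only: `run_breakdown` is a placeholder for the single-item query, and the worker count would have to respect Adobe's per-company throttling limits):

```python
# Sketch: running breakdown sub-queries concurrently instead of one by
# one. Keep max_workers modest, since the API throttles requests.
from concurrent.futures import ThreadPoolExecutor

def breakdown_many(run_breakdown, item_ids, max_workers=5):
    """Map each item_id to its breakdown result, with up to
    max_workers requests in flight at once."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(run_breakdown, item_ids)  # preserves order
    return dict(zip(item_ids, results))

# Stub standing in for a real per-item API call:
def fake_breakdown(item_id):
    return {"item": item_id, "tags": ["t1", "t2"]}

out = breakdown_many(fake_breakdown, ["a", "b", "c"], max_workers=2)
```

Even so, 60,000 sub-queries per day is a lot, which is why I'm asking whether 1.4 handles this better.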
I'm a little afraid that Adobe is going to shut down the 1.4 API this year in April; I found only this thread, but I'm not sure about it.
I'm regularly checking the Git source of the project, but so far I haven't seen a solution there.
I've checked some links, but none of them answers this question:
Source: Python Questions