FoxPro cursor size

This seems like such an easy problem, but I can't find a solution anywhere. My colleague and I are working on an application that uses FoxPro's XML dump objects. It works great, but we want to split the table into several files based on size restrictions.

It seems like this should be the easy part: how do you find the size of a cursor in FoxPro?

+4
3 answers

If you mean the file size, you can find the file the cursor is based on by calling the DBF() function with the cursor's alias, checking that the returned name has a .dbf extension, and then using the file functions to read the file size. The cursor may exist only in memory, though (the reported "file name" will have a .tmp extension, if I remember correctly), in which case an alternative is to use RECCOUNT() (to get the number of records) in combination with AFIELDS() (to get the size of each record) to approximate the file size. (In-memory cursors can sometimes be forced to disk by including the NOFILTER clause in the query that generates them.)
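As an illustration, here is a minimal sketch of the file-based check, assuming a cursor open under the hypothetical alias "MyCursor" (ADIR() fills an array whose second column is the file size in bytes):

lcFile = DBF( "MyCursor" )
IF UPPER( JUSTEXT( lcFile )) == "DBF"
   && Backed by a real table on disk; read the size from the directory entry
   IF ADIR( laFiles, lcFile ) = 1
      ? "File size in bytes:", laFiles[1,2]
   ENDIF
ELSE
   && In-memory cursor (.tmp); fall back to the RECCOUNT()/AFIELDS() estimate
   ? "Cursor is in memory; approximate its size from the record layout"
ENDIF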

+1

RECSIZE() returns the length of a single record in bytes, so RECSIZE() * RECCOUNT() will give you the size of the data. Everything already discussed is accurate.
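As a quick sketch of that arithmetic (the alias name is hypothetical; HEADER() returns the size of the table header, and a .dbf file ends with a one-byte EOF marker):

nApproxBytes = HEADER( "MyCursor" ) + RECCOUNT( "MyCursor" ) * RECSIZE( "MyCursor" ) + 1
? nApproxBytes  && approximates the .dbf size; memo contents live in the separate .fpt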

Regarding memo fields, if you need to know how big THEY are, you can add a new integer column, say "MemoLength", to your table structure. Then:

REPLACE ALL MemoLength WITH LEN( ALLTRIM( YourMemoField ))

You can then use MemoLength to define your breakdown groups, accounting for the size of this added column, together with RECSIZE() times the number of rows you want to extract. A sketch of this approach follows.
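A minimal sketch of that idea, assuming a hypothetical table YourTable with a memo field YourMemoField (ALTER TABLE needs exclusive access):

USE YourTable EXCLUSIVE
ALTER TABLE YourTable ADD COLUMN MemoLength I
REPLACE ALL MemoLength WITH LEN( ALLTRIM( YourMemoField ))
USE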

Alternatively, you can run a query based on the table's primary key column, which you can use as a reference, and do something like...

SELECT YourPrimaryKey, LEN( ALLTRIM( YourMemoField )) AS MemoLength ;
   FROM Tags ;
   INTO CURSOR SomeHoldingCursor READWRITE

... or, instead of a cursor, select

INTO TABLE MemSizeTable

Create an index on MemSizeTable and you can join against it for the extra information. This way you do not distort your original record size or alter the original table structure, yet through the relation you can still pull out the pieces you need.
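For example, a hedged sketch with the names used above (Tags standing in for your source table):

SELECT MemSizeTable
INDEX ON YourPrimaryKey TAG PrimKey
SELECT Tags
SET RELATION TO YourPrimaryKey INTO MemSizeTable
&& While scanning the source table, the matching memo size is in reach
SCAN
   ? YourPrimaryKey, MemSizeTable.MemoLength
ENDSCAN
SET RELATION TO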

+2

Here is a fully functional example, based on a sample cursor and dummy records... The critical piece is the DumpXML() routine: it takes the alias of the cursor to be dumped, the file size at which you want to close each output file (in KB), and the file-name prefix for the XML output. It will automatically generate a numbered sequence, e.g.: MyXMLOutput1.xml, MyXMLOutput2.xml, MyXMLOutput3.xml, etc., for as many files as necessary. Took me about 15 minutes.

CREATE CURSOR SomeTest ;
   (  SomeField1 c(10), ;
      AnotherField i, ;
      SomeNumber N(8,2), ;
      MemoFld m, ;
      SomeDateTime t ;
   )

INSERT INTO SomeTest VALUES ( "testchar10", 9403, 12345.78, "some memo value string", DATETIME() )

DumpXML( ALIAS(), 300, "MyXML" )

FUNCTION dumpXML
LPARAMETERS cAliasName, nSizeLimit, cNameOfXMLOutput

IF NOT USED( cAliasName )
   RETURN ""
ENDIF

*/ Assume size limit in "k"
nSizeLimit = nSizeLimit * 1024

SELECT ( cAliasName )

*/ Get a copy of the structure without disrupting the original
USE IN SELECT( "MySample" )   && pre-close in case left open from a prior cycle
SELECT * ;
   FROM ( cAliasName ) ;
   WHERE RECNO() = 1 ;
   INTO CURSOR MySample READWRITE

SELECT MySample

*/ Populate each field with its maximum capacity... typically
*/ critical for your char-based fields
AFIELDS( aActualStru )
cMemoFields = ""
lHasMemoFields = .f.

FOR i = 1 TO FCOUNT()
   cFieldName = FIELD( i )
   DO CASE
      CASE aActualStru[i,2] = "C"
         replace &cFieldName WITH REPLICATE( "X", aActualStru[i,3] )
      CASE aActualStru[i,2] = "L"
         replace &cFieldName WITH .T.
      CASE aActualStru[i,2] = "D"
         replace &cFieldName WITH DATE()
      CASE aActualStru[i,2] = "T"
         replace &cFieldName WITH DATETIME()
      CASE aActualStru[i,2] = "M"
         */ Default memo to a single character to ensure the
         */ closing field tag is included in the XML
         replace &cFieldName WITH "X"
         */ If a MEMO field, add this element to a string to be
         */ macro'd later to detect its size... Each record can
         */ contain MORE than one memo field...
         */ Ex: + LEN( ALLTRIM( MemoFld ))
         lHasMemoFields = .T.
         cMemoFields = cMemoFields + " + len( ALLTRIM( " + cFieldName + " ))"
      CASE aActualStru[i,2] = "I"
         */ Integer, force to ten 1's
         replace &cFieldName WITH 1111111111
      CASE aActualStru[i,2] = "N"
         */ Actual numeric, not an integer, double or float.
         */ Allow for full length plus decimal positions
         NumValue = VAL( REPLICATE( "9", aActualStru[i,3] - aActualStru[i,4] - 1 ) ;
            + "." + REPLICATE( "9", aActualStru[i,4] ))
         replace &cFieldName WITH NumValue
   ENDCASE
ENDFOR

*/ Strip the leading " + " from the string in case of multiple fields
IF lHasMemoFields
   cMemoFields = SUBSTR( cMemoFields, 3 )
ENDIF

cXML = ""
LOCAL oXML as XMLAdapter
oXML = CREATEOBJECT( "XMLAdapter" )
oXML.AddTableSchema( "MySample" )
oXML.ToXML( "cXML", "", .f. )

*/ Now determine the size of a single record at its full length -- less memo
nSizeOfPerRecord = LEN( STREXTRACT( cXML, "<MySample>", "</MySample>", 1, 4 ))

*/ ... and the rest is the header overhead of each XML dump
nSizeOfSchema = LEN( cXML ) - nSizeOfPerRecord

*/ Now, back to the production alias to be split
SELECT ( cAliasName )
nNewSize = 0
nXMLCycle = 0

*/ If we just started, or finished writing another block and need
*/ to generate a new group of XML dumps, reset the running size
nNewSize = nSizeOfSchema

*/ Always blank out the temp cursor for each batch...
SELECT MySample
ZAP

SELECT ( cAliasName )
SCAN
   IF lHasMemoFields
      nAllMemoSizes = &cMemoFields
   ELSE
      nAllMemoSizes = 0
   ENDIF

   IF nNewSize + nSizeOfPerRecord + nAllMemoSizes > nSizeLimit
      */ The upcoming record would exceed capacity; finish the XML
      */ with all records up to this point
      nXMLCycle = nXMLCycle + 1
      cNewFile = FULLPATH( cNameOfXMLOutput + ALLTRIM( STR( nXMLCycle )) + ".XML" )

      oXML = CREATEOBJECT( "XMLAdapter" )
      oXML.AddTableSchema( "MySample" )
      */ Generate the XML cycle of these qualified records...
      oXML.ToXML( cNewFile, "", .t. )

      */ Restart for the next pass of data
      nNewSize = nSizeOfSchema

      */ Always blank out the temp cursor for each batch...
      SELECT MySample
      ZAP
   ENDIF

   */ Add the record to the total size...
   nNewSize = nNewSize + nSizeOfPerRecord + nAllMemoSizes

   */ We have a record to be included in the segment dump...
   */ scatter from the original table and gather into the temp
   SCATTER MEMO NAME oFromOriginal
   SELECT MySample
   APPEND BLANK
   GATHER MEMO NAME oFromOriginal

   */ Back to the original table driving the XML dump process
   SELECT ( cAliasName )
ENDSCAN

*/ If "MySample" still has records not yet flushed by the limit, write those too
IF RECCOUNT( "MySample" ) > 0
   nXMLCycle = nXMLCycle + 1
   cNewFile = FULLPATH( cNameOfXMLOutput + ALLTRIM( STR( nXMLCycle )) + ".XML" )

   oXML = CREATEOBJECT( "XMLAdapter" )
   oXML.AddTableSchema( "MySample" )
   */ Generate the XML cycle of these qualified records...
   oXML.ToXML( cNewFile, "", .t. )
ENDIF

*/ Done with the "MySample" cursor-to-XML analysis...
USE IN SELECT( "MySample" )
ENDFUNC
+1
