Ceries vs AWS Lambda Message Queuing

I am currently developing a system for analyzing and visualizing textual data based on NLP.

The backend (Python + Flask + AWS EC2) processes the analysis and uses the API to feed the result back to the frontend application (FLASK + D3 + Heroku), which only processes interactive visualizations.

Currently, prototype analysis is a basic python function, which means that analysis takes longer in large files and thus leads to a request timeout while connecting the API data to the interface. Like many file analysis, it runs in a linear lock queue.

To scale this prototype, I need to change the function Analysis(text)as a background task so that it does not block further execution and cannot make a callback after the function is executed. The input text is extracted from AWS S3, and the output is a relatively large JSON format intended for storage in AWS S3, so the API will simply retrieve this JSON, which contains data for all graphs in the front-end application. (I find S3 a little easier to handle than creating a large relational database structure to store persistent data.)

I am doing simple examples with Celery and consider this a suitable solution, however I just read in AWS Lambda, which on paper seems to be the best solution in terms of scaling ...

The function Analysis(text)uses a pre-built model and functions from the relatively common Python NLP packages. Since my lack of experience in prototype scaling I would like to ask about your experience and judgment on which solution would be most suitable for this scenario.

Thank:)

+4
source share
1 answer

. AWS Lambda, , . , , . Jinja, HTML- . Weasyprint, HTML Pdf. pdf . PDF , .

, , , . , , , . , .

, . ( , , ). . , . . , , . , , , .

, AWS Lambda. , , Pdf . , AWS Lambda . -, , - . - , . , NodeJS, npm, , , Python. Lambda , , ​​ . , , Lambda - ( -). , , Lambdas , , . , .

AWS Lambdas . , , , , .

0

Source: https://habr.com/ru/post/1658420/


All Articles