When creating datasets in AWS Glue DataBrew from your Amazon S3 data lake, you can now create dynamic datasets that schedule data preparation on newly arriving S3 files, or that apply transformations only to filtered or conditionally chosen S3 files and folders. A dynamic S3 path can select files by a time window or by the time of the last file update, and you can define custom parameters that replace string, number, or date values in your S3 file path, with filter conditions such as begins with, ends with, contains, does not contain, less than, greater than, and before. Custom parameter names can be included as columns in your datasets, and the revised schema is used by jobs that run on dynamic datasets. With parameterized S3 paths and files, you can schedule existing recipes to run on the selected dynamic datasets.
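As a rough sketch of what such a dynamic dataset might look like in code, the snippet below builds the keyword arguments for a `create_dataset` call with the boto3 `databrew` client. The bucket name, key pattern, parameter name, and filter values are all hypothetical, and the exact expression grammar and field names should be checked against your SDK version and the DataBrew API reference before use:

```python
def build_dynamic_dataset_request(name="sales-dynamic"):
    """Return kwargs for databrew_client.create_dataset(**kwargs).

    Sketch only: field names follow the boto3 "databrew" client's
    create_dataset request shape; verify against your installed SDK.
    """
    return {
        "Name": name,
        "Format": "CSV",
        "Input": {
            "S3InputDefinition": {
                "Bucket": "my-data-lake",            # hypothetical bucket
                "Key": "sales/{region}/file.csv",    # {region} is a custom path parameter
            }
        },
        "PathOptions": {
            # Time-window condition: only pick up recently modified files.
            # Expression syntax here is illustrative; consult the DataBrew docs.
            "LastModifiedDateCondition": {
                "Expression": "relative_after :since",
                "ValuesMap": {":since": "-24h"},
            },
            # Define the {region} parameter with a "begins with" filter and
            # surface its value as a column in the dataset schema.
            "Parameters": {
                "region": {
                    "Name": "region",
                    "Type": "String",
                    "CreateColumn": True,
                    "Filter": {
                        "Expression": "starts_with :prefix",
                        "ValuesMap": {":prefix": "us-"},
                    },
                }
            },
        },
    }

# Usage (requires AWS credentials; not executed here):
# import boto3
# databrew = boto3.client("databrew")
# databrew.create_dataset(**build_dynamic_dataset_request())
```

Because the parameter is declared with `CreateColumn: True`, the matched path value would appear as a `region` column in the dataset, matching the schema behavior described above.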


