Using Python for financial analysis and modeling with CSV data can make the process much easier, and it makes data analysis more efficient and easier to automate.
Run the following command to view the errors raised while processing your DAG files:
airflow dags list-import-errors
Make sure you have set the correct path for the DAGs folder:
airflow config get-value core dags_folder
The error message is clear: the jwt package is not installed. Say you already have it in requirements.txt;
then the issue is the GitHub YAML pipeline. The build step activates a virtual env and installs the packages into that virtual env; however, your zip step excludes venv/*. This is the reason the Python packages do not exist when you invoke the HTTP trigger.
- name: Create and start virtual environment
run: |
python -m venv venv
source venv/bin/activate
- name: Install dependencies
run: pip install -r requirements.txt
# Optional: Add step to run tests here
- name: Zip artifact for deployment
run: zip -r release.zip function_app.py host.json -x "*.txt venv/*" ".git/*" ".github/* *.md .gitignore local.*"
There are a few ways to deploy; we can use this method to test it out first:
func azure functionapp publish <function app name>
Once completed, go to the Deployment tab under the Function App to verify the final status.
It might take 30 seconds for the function to show up in the Overview page.
Once it is working, you can refer to this guide to set up a GitHub Action to deploy via a pipeline.
Yep! Even after disabling the bot and removing the Chat API, it might still show up as "Disabled." To fully remove it, try deleting the service account in Google Cloud Console > IAM & Admin > Service Accounts. That should do the trick!
Your JDK version may not be compatible with Scala 2.11.1; please check that.
What I had to do was go to the library file (cascadeselect.d.ts) and change this: optionGroupChildren: string | undefined;
to this: optionGroupChildren: any;
Now it works as expected.
You probably have an /api/user
or /api/me
endpoint. Why not do it there instead of creating a dummy endpoint?
Another option is to use axios
interceptors: https://axios-http.com/docs/interceptors
// dispatch and logoutUser come from your own store/auth code
axios.interceptors.response.use(
  (response) => response,
  (error) => {
    if (error.response?.status === 401) { // response can be undefined on network errors
      dispatch(logoutUser()) // probably needs a debounce
    }
    return Promise.reject(error)
  }
)
I'm not an expert on financial datasets specifically, but when looking for real-world data my go-to destinations (apart from Kaggle) are usually government websites:
There are also others that I frequently use:
For me, setting CMAKE_RUNTIME_OUTPUT_DIRECTORY worked. I have no idea why; it looks like a bug.
Python 3.12 doesn't come with distutils, so you'd have to install it with pip install setuptools.
https://github.com/PaloAltoNetworks/pan-os-python/issues/529#issuecomment-2394050583
I'd say that this is a bit bizarre. I don't know all the ins and outs of ASAN, and I'm not able to pick out exactly what it's seeing in what you sent, but I would say that if it's a compiled executable built in pure Rust by Cargo, there shouldn't be anything to worry about.
Rustc is very aware of modules and crate boundaries (crates compiled to intermediate rlib files instead of native libs, as you mentioned) and mangles all non-externed symbols. Short of a dramatic compiler bug, given the way the compiler works, I don't see how there could be any overlaps or symbol mix-ups, if that's what it's detecting.
To solve this problem, all you need to do is install Flutter on a partition other than C:, for example D:\flutter.
You have to wrap the AlertDialog in a SizedBox with your preferred width and height.
return SizedBox(
width: 200, // give width
height: 200, // give height
child: AlertDialog(
// your code
),
);
brew services start mysql@8.0
if you have the MySQL 8.0 version, or
brew services start mysql
if you don't have any specific version.
When I go to my developer account I can't see the Xcode-managed provisioning profile, but I'm sure it was created by my Xcode last year.
As you can see from this link, it's correct that you don't see profiles that are managed by Xcode.
I've rebuilt the app, but Xcode keeps signing it with the same provisioning profile with which it signed the app last year.
The problem is that Xcode does not create a new provisioning profile while it still has one in a sort of "cache". To find out where this cache is, you can drag this icon
into a program like VSCode and it will show the path. (That menu is under your app target -> "Signing & Capabilities".)
It will probably be something like /Users/<your_name>/Library/MobileDevice/Provisioning Profiles;
inside that folder you will see all your provisioning profiles.
Now, after deleting the old cached profile, Xcode should regenerate the provisioning profiles with a one-year validity.
TL;DR: close Xcode, go to /Users/<your_name>/Library/MobileDevice/Provisioning Profiles,
delete the old provisioning profile, and reopen Xcode.
I had the same issue where my API worked locally but returned 405 Method Not Allowed after deploying to Vercel.
For me, the problem was Vercel Authentication (https://vercel.com/docs/security/deployment-protection) blocking unauthenticated requests.
Fix: Go to your project in Vercel Dashboard. Navigate to Settings → General. Disable "Vercel Authentication". After disabling this setting, my API started working again!
button.SendKeys(Keys.Enter); worked for me.
I suppose that if your Excel opens the data, the issue isn't with encoding. You can get the preferred encoding by importing 'locale' and calling locale.getpreferredencoding(). Look at the header row while opening the data in a text editor to find out if any field has escaped characters like '\t', and also look at the CSV delimiter (the default in read_csv is ','; yours may be different).
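For illustration, here is a small sketch of those checks (the file name "data.csv" and the ';' delimiter are assumptions to adapt to your file):

import locale
import pandas as pd

print(locale.getpreferredencoding())  # the encoding your system (and likely Excel) prefers

# Peek at the raw header row to spot escaped characters and the real delimiter
with open("data.csv", encoding=locale.getpreferredencoding(), errors="replace") as f:
    print(repr(f.readline()))

# read_csv defaults to sep=','; pass sep explicitly if your file uses ';' or '\t'
df = pd.read_csv("data.csv", sep=";", encoding=locale.getpreferredencoding())
print(df.head())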
In your Tomcat directory webapps\manager\META-INF, remove the Valve section in the context.xml file:
<!--Valve className="org.apache.catalina.valves.RemoteAddrValve"
allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" /-->
With KobWeb you can also develop web applications.
For me, I just changed the file name from router.ts to route.ts and then it worked.
import redis

r = redis.Redis()  # connect to localhost:6379 by default
r.hset(hash_key, field, value)
r.hexpire(hash_key, ttl, field)  # per-field TTL; requires Redis >= 7.4 and a recent redis-py
Thank YOU ARASH!! You saved my butt.
The #[\ReturnTypeWillChange] attribute and the error message both belong to PHP 8, so I guess that the server is upgraded.
I'll give you several ideas to investigate. Hopefully one of them will lead you to the problem.
Go to each different pod, get what configuration options each one is really using, and compare ALL the configuration files. Maybe they aren't really using the same database:
$ ejabberdctl dump_config /tmp/aaa.yml
$ cat /tmp/aaa.yml
Is there any difference between the node that shows the rooms in get_user_rooms ?
Register an account in the database, then check in the three nodes that they really get that account:
$ ejabberdctl registered_users localhost
admin
An account is registered in the cluster, and the user can login using those credentials in any node of the cluster. When the client logins to that account in a node, the session exists only in that node.
Similarly, the configuration of the rooms is stored in the cluster, and a room can be created in any node, and will be accessible transparently from all the other nodes.
The muc room in fact is alive in one specific node, and the other nodes will just point to that room in that node:
Rooms are distributed at creation time on all available MUC module instances. The multi-user chat module is clustered but the rooms themselves are not clustered nor fault-tolerant: if the node managing a set of rooms goes down, the rooms disappear and they will be recreated on an available node on first connection attempt.
So, maybe the ejabberd nodes connect correctly to the same database, but get_user_rooms doesn't show correct values, or the problem is only in the MUC service?
If this is the wanted result, you need to update the content of <mat-option> as follows:
<mat-form-field>
<mat-label>Toppings</mat-label>
<mat-select [formControl]="toppings" multiple>
@for (topping of toppingList; track topping) {
<mat-option [value]="topping">
<div class="row">
{{topping}}
<button mat-button>Only</button>
</div>
</mat-option>
}
</mat-select>
</mat-form-field>
<style>
.row {
display: flex;
flex-direction: row;
}
</style>
For more info on flex, you could read this extensive and well-written guide:
https://css-tricks.com/snippets/css/a-guide-to-flexbox/
Of course using
is not elegant nor the best way to space things; you can go further and add padding, etc.
Let's debug this step by step:
int i = 0;
→ i is initialized to 0, so i contains the value 0.
++i
→ Pre-increment happens, so now i becomes 1.
i + ++i
→ Substituting values:
i is 0 (original value before the pre-increment).
++i makes i = 1, so ++i returns 1.
i + ++i = 0 + 1 = 1.
i = 1 (final value).
System.out.println(i); prints 1.
Your next question: does the increment change the memory address?
Are you using any XSL file for transformation which does the job of excluding that particular file from being included? WiX should harvest all the files present inside a directory; it does not harvest a specific file. Your heat command will help to diagnose this.
Alternatively, you can write a file copy element in WiX for the particular file, to copy it from the source to the destination. Similar to the below:
<Component Id="cmp1F85" Directory="INSTALLFOLDER" Guid="*">
<File Id="filC6A7" KeyPath="yes" Source="SourceDir\xyz.json" />
</Component>
I followed the instructions of the SelArom Dot Net tutorial - Customizing the Model with Regions and Fields - and found out that I need to use a new region to add fields, or specify a custom Region attribute above the custom field.
public class HomePage : Page<HomePage>
{
[Region(Title = "General", Icon = "fas fa-pen")]
public GeneralHomePageRegion General { get; set; }
}
public class GeneralHomePageRegion
{
[Field(Title = "Caption", Description = "Max 50 characters")]
public StringField Title { get; set; }
}
Trying to add a custom font in Xcode 16.2: it fails to set the custom font from the storyboard even though adding the files works. The answer for it is:
Poppins: ["Poppins-Regular", "Poppins-Thin", "Poppins-Light", "Poppins-Medium", "Poppins-SemiBold", "Poppins-Bold", "Poppins-ExtraBold"]
Ran into the same issue.
Tried to use:
1. Removing the aria-hidden attribute by adding the following next to the useEffect block:
const exampleSlideVar = document.querySelectorAll('.slick-slide');
exampleSlideVar.forEach((slide) => {
slide.setAttribute('aria-hidden', 'false');
});
but this way I've lost my button controls;
2. Adding tabindex={-1}
attribute to buttons
Need help too! Thanks!
According to the QtTranslation documentation this is possible as follows:
//% "%1 my translatable suffix!"
QString text = qtTrId("qtn_foo_bar", var);
ul {
list-style-type: none;
padding-left: 10px;
display: table;
}
ul li{
list-style: none;
display: table-row;
}
.product-details-description ul li:before {
font-family: 'FontAwesome';
content: '\f06c';
margin:0 5px 0 10px;
color: #34eb64;
display: table-cell;
text-align: right;
padding-right: .3em;
}
The solution was to set the whole chartData another time after setting the new values for the annotation. I don't know why that works and why @naren murali's example doesn't, but it works now for me.
I found the answer here: https://github.com/microsoft/TypeScript/issues/36444#issuecomment-578572999
I had to convert my class to a type in order to achieve the solution. Something like this:
type PopulationDto<E> = {
[K in keyof E]: {
path: keyof E;
population?: PopulationDto<
E[K] extends Array<any>
? E[K][number]
: E[K] extends object
? E[K]
: any
>[];
}
}[keyof E];
You can add a prompt that pushes the model to prioritize earlier answers to ensure consistency. For example, you may ask the model to validate whether its new answer conflicts with its prior knowledge, and only change the answer if the new input is significantly more reliable.
A possible prompt template would be: "Are you confident that this new answer is correct based on your knowledge?"
Additionally, when generating responses, you can adjust the model's temperature and sampling strategy. A higher temperature often leads to more varied outputs, while a lower temperature results in more deterministic answers. By controlling these parameters, you can make the model's answers more consistent.
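As an illustration, a minimal sketch assuming an OpenAI-style chat client (the model name and prompts are placeholders):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0.2,      # lower temperature -> more deterministic, more consistent answers
    messages=[
        {"role": "system", "content": "Prefer your earlier answer unless the new evidence is clearly more reliable."},
        {"role": "user", "content": "Are you confident that this new answer is correct based on your knowledge?"},
    ],
)
print(response.choices[0].message.content)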
print("Hello, World!")  # This line of code prints the text "Hello, World!" to the console.
From what I checked there is no definitive solution to this, but you can try upgrading pip
or installing python-dotenv
or passing --no-build-isolation
when installing. (Solutions taken from https://github.com/pypa/packaging-problems/issues/721 and https://github.com/numpy/numpy/issues/24377)
Also I think https://github.com/nuncjo/cython-installation-windows might be a better guide for you to go through.
That's awesome GBWDev! Is there any way to make this part a bit more flexible (generic)?
if ($this.text().trim() == "Step three")
Getting(addressing) "Step three" as the last element or last dt child of the dl?
To fix the black background in SwiftUI List, use:
List {
// Your list content here
}
.listStyle(PlainListStyle())
.background(Color.white)
Are you trying to do this:
text = 'example'
result = [
text[i:len(text)-i]
for i in range( (len(text)+1)//2 )
]
print(result)
result:
['example', 'xampl', 'amp', 'm']
and if text = 'example1'
:
['example1', 'xample', 'ampl', 'mp']
Thanks all for your input. I found a solution. I create the SmallScreenForm and then I iterate through the controls in the main form and do a Find on the controls in the SmallScreenForm, being careful to check for child controls. I then copy the width, height, location and font settings from the SmallScreenForm to the main form controls. Those are the only attributes that were changed and, hey presto, it works. After all have been set I dispose of the SmallScreenForm.
Probably not the best way to do it, but this is a single application system running my software only. At least now I can support some of the more popular small screens for those who cannot or won't purchase a PC with FULLHD display.
In each directory in the app directory of Expo SDK 52, I now have a file _layout.tsx with the following content:
import { Stack } from 'expo-router';
import { useEffect } from 'react';
import { useFonts } from 'expo-font'; // provides the useFonts hook used below
export default function RootLayout() {
const [loaded] = useFonts({
SpaceMono: require('../assets/fonts/Urbanist Regular.ttf'),
//SpaceMono: require('../assets/fonts/SpaceMono-Regular.ttf'),
});
return (
<Stack>
<Stack.Screen name="(tabs)" options={{ headerShown: false }} />
<Stack.Screen name="add_store_product" />
<Stack.Screen name="customer_details" />
<Stack.Screen name="customer" />
<Stack.Screen name="store_product" />
</Stack>
);
}
Those files in the stack are in the same directory as the _layout.tsx file, so I expect IntelliSense to list those routes for me whenever I use router.push(), but they are not listed. Besides, when I rename a file or directory inside the app directory, the changes to the file names or directories are not reflected in IntelliSense.
What could be the error? Could it be Expo or Npm caching or whatever?
Thank you
I have set FirebaseInAppMessagingAutomaticDataCollectionEnabled to true in my Info.plist and it worked for me
Using only shape is not enough to prevent the title from moving, both shape
and collapsedShape
must be provided.
ExpansionTile(
shape: LinearBorder.none,
collapsedShape: LinearBorder.none,
...
I just updated the JDK version to 17:
sudo apt update && sudo apt install openjdk-17-jdk -y
And verify which JDK version is installed:
java -version
And if you have multiple Java versions, set Java 17 as the default using:
sudo update-alternatives --config java
And restarted Jenkins:
sudo systemctl restart jenkins
And my issue was solved.
I was not expecting that I could find here one of the most interesting reads I ever had regarding temporal blocking. Thanks for that. It was a very educational and pleasant read! I could add my thesis here in case of further interest:
https://spiral.imperial.ac.uk/entities/publication/b1f50f8c-7a29-4521-b8eb-e5c2d5949a20
I got this working. The configuration was correct, but the OpenID tab is not the right place to log in. Instead, in the Sign-In tab an option appears to log in via the OAuth server (whatever the name of your OAuth server is). The two properties below are not required in order to log in via OAuth/OpenID Connect:
ENABLE_OPENID_SIGNIN = false
ENABLE_OPENID_SIGNUP = false
The offered solutions could be of better quality. There are a number of unsolved issues with the given answers:
If you don't know where to look, then searching the entire filesystem isn't as trivial as specifying the root directory on linux. There's some stuff to exclude. What?
Following symlinks can lead to loops which means the search never terminates and never investigates some of the files on the disk.
In most cases, you do not want to search inside virtual directories like /dev/
, /proc/
and /sys/
, because that will spew errors, and not search actual files on disk, instead searching program memory and raw device data, causing the search to take very, very long.
You probably also don't want to search in /tmp/
, which is usually a memory-mounted filesystem that is purged upon reboot and automatically cleaned on modern Linuxes.
The terminal has a limited capacity for text. If this is exceeded, results are lost. Results should be put in a file.
If the terminal connection drops at any point in the search, results are lost and everything has to be restarted. Running in the background would be much preferred.
Searching for code, with all the examples, is still very tricky on the command line, in particular escaping things:
Various special characters in bash have to be escaped.
Grep searches for a regex which has to be escaped.
If commands are put into other commands, that leads to more things being escaped.
All three combined when searching code is a literal nightmare to figure out: the user should have an input for what to search for that does not require any escaping.
Filenames can have special characters in them, mucking with your search. The command should be able to deal with evil filenames with quotes and spaces and newlines and other shenanigans in them.
Files could be removed or changed while you're searching, leading to 'File not Found' errors cluttering the output. You could not have permission to things, also cluttering the output. Including an option to suppress errors helps.
Most of the examples use only a single thread, making them unnecessarily, dreadfully slow on modern many-core servers, even though the task is embarrassingly parallel. The search command should start one thread per CPU core to keep it busy.
The following should be a big improvement:
# Note: Change search string below here.
nCores=$(nproc --all)
read -r -d '' sSearch <<'EOF'
echo $locale["timezone"]." ".$locale['offset'].PHP_EOL;
EOF
# -print0 goes after the filters so that only matching paths are emitted
find . \( -type f \) -and \( -not \( -type l \) \) -and \( -not \( -path "./proc/*" -o -path "./sys/*" -o -path "./tmp/*" -o -path "./dev/*" \) \) -print0 | xargs -P $nCores -0 grep -Fs "$sSearch" | tee /home/npr/results.txt &
If you do not want to suppress grep errors, use this:
# Note: Change search string below here.
nCores=$(nproc --all)
read -r -d '' sSearch <<'EOF'
echo $locale["timezone"]." ".$locale['offset'].PHP_EOL;
EOF
find . \( -type f \) -and \( -not \( -type l \) \) -and \( -not \( -path "./proc/*" -o -path "./sys/*" -o -path "./tmp/*" -o -path "./dev/*" \) \) -print0 | xargs -P $nCores -0 grep -F "$sSearch" | tee /home/npr/results.txt &
Change EOF to any other A-Za-z delimiter if you want to search for the literal text EOF.
With this, I reduced a day-long search that had thousands of errors resulting from several of the top answers here into an easy sub 1-minute command.
Reference:
Also see these answers:
running bash pipe commands in background with & ampersand
How do I exclude a directory when using `find`? (most answers were wrong and I had to fix it for modern find).
https://unix.stackexchange.com/questions/172481/how-to-quote-arguments-with-xargs
https://unix.stackexchange.com/questions/538631/multi-threaded-find-exec
PDF is not a structured language, but instead a display-oriented format. In fact, it is even better described as a rendering engine programming language.
To render the three words "The lazy fox", the PDF-generating software can choose to instruct either:
The lazy <move to bottom right> 36 <come back to the page> fox
The nice fox <move back to the start position of "nice"> <draw a white rectangle over the word "nice"> lazy
Thus the ability to extract contents in a structured way from your PDF can vary greatly, depending on what produced the PDF.
Your first mission is to ensure you only have 1 stable source of PDF.
Do not expect to create a general-use "any PDF containing tables-to-JSON".
OK, let's say that you're OK with it, you just have to get the juice of that specific PDF, and once done, you'll trash the whole project never to work on it anymore (no way to "Manu, the engine you gave us in 2025 doesn't work anymore on the 2027 version of the PDF, can you repair it please?").
Your best bet then will be to try tools, starting from the simplest ones.
First try PDF-to-text extractors (like pdf-parse
; but please give an excerpt of its output!),
but don't count on them to output a pretty table;
instead try to find a pattern in the output:
if your output looks like:
col1
col2
col3
col1
col2
col3
pagenumber
col1
col2
col3
then you're good to go with some loops, parsing, detection and steering.
Be warned that you may have some manual iterations to do,
for example if the table's data is hardly distinguishable from the page numbers or headers or footers,
or if the table contains multi-line cells:
col1
col2
second line of col2 that you could mistake for a col3
col3
Then this would be a cycle of "parse PDF to a .txt -> regex to JSON -> verify consistency -> if it fails then edit the .txt -> regex to JSON -> verify -> […]".
This would be the most efficient solution,
depending on the kind of guts of your PDF of course.
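As a rough sketch of that loop-and-parse step for the col1/col2/col3 pattern shown above (the input file name, the column names and the page-number heuristic are assumptions to adapt to your own PDF):

import json
import re

with open("extracted.txt", encoding="utf-8") as f:
    lines = [ln.strip() for ln in f if ln.strip()]

# Drop lines that look like bare page numbers
lines = [ln for ln in lines if not re.fullmatch(r"\d{1,4}", ln)]

# Group the remaining lines three by three into rows
rows = [
    {"col1": lines[i], "col2": lines[i + 1], "col3": lines[i + 2]}
    for i in range(0, len(lines) - len(lines) % 3, 3)
]

print(json.dumps(rows, indent=2, ensure_ascii=False))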
Level 2 would be to parse the PDF instructions (pdfjs-dist
may be good at it) to detect the "pen moves" between text tokens, and then mentally place it on a map, knowing that buckets at the same ordinate with subsequent abscissas are adjacent words, or cells.
But I'm not sure it's worth the effort, and then you could go to…
In case you need a fully automated workflow that level 1 can't provide (from your specific PDF),
then you could use pdfjs-dist
to render the PDF, pushing the image to table-aware OCR software that would output something more suitable to the "regex to JSON" last step of Level 1.
What worked for me was placing the myPreChatDelegate earlier in the class since its reference is 'weak'. If you put it on the class variables, it stays alive to listen to the delegate successfully.
.mat-mdc-progress-bar {
  --mdc-linear-progress-active-indicator-color: green; /* progress bar color */
  --mdc-linear-progress-track-color: black; /* background color */
}
This is the solution I found for a quick and dirty way of changing it via CSS.
This library is deprecated. Can you suggest how to do this in native Three.js?
The first thing to check is of course whether the file exists in the web root and whether the file permissions allow the web server user to access it. If that is the case, there might also be some URL rewriting going on that messes things up.
@inject IJSRuntime JS worked for me
I'm late to this party I know, but what about kill -STOP pid, on the screensaver to disable, and kill -CONT pid to resume?
You'll need to create a _layout.tsx
file inside each subdirectory. Inside this _layout.tsx
file, you'll need to define the structure for the navigation:
...
<Stack>
<Stack.Screen name="screen1" />
<Stack.Screen name="screen2" />
...
</Stack>
...
dessertHeaders: [
{ title: '', key: 'data-table-expand' },
{
title: 'Dessert (100g serving)',
align: 'start',
sortable: false,
key: 'name',
},
{ title: 'Calories', key: 'calories' },
{ title: 'Fat (g)', key: 'fat' },
{ title: 'Carbs (g)', key: 'carbs' },
{ title: 'Protein (g)', key: 'protein' },
{ title: 'Iron (%)', key: 'iron' },
],
See Demo
I guess you should add a loading state while the component is loading.
For example :
const Component = dynamic(() => import(`@/components/admin/tabs/${compName}.js`), {
ssr: false,
loading: () => <div>Loading...</div>
});
Or use:
if (!data) return <div>Loading...</div>;
There is a Microsoft article which details cloning a database.
This can be set as schema only and I believe can be done on versions which predate the 2014 version.
I had a similar problem some time ago and this user helped me solve it. It is the best solution I found, it works fine.
For now, what I did was to use templates.
source: z_full_json
# Truncate to 64KB (adjust size as needed)
template: '{{ if gt (len .Value) 65536 }}{{ slice .Value 0 65536 }}...TRUNCATED{{ else }}{{ .Value }}{{ end }}'
This way, Promtail will truncate any huge JSON value that Loki might reject, so I am still able to get most of my JSON label values for each log and only miss a few.
I've gotten an approved business-initiated template, but I'm failing to send it with the Twilio API because it's still flagged as a "free-body" message instead of a template. I've used the correct template SID but the issue persists. Does anyone know why this happens?
Kindly submit the website URL to Google Search Console.
try using this:
import os
import httpx
from groq import Groq
client = Groq(
api_key=os.environ.get("api_key"),
http_client=httpx.Client(verify=False) # Disable SSL verification
)
I got the same error; it was fixed by creating a certificate with the tool, but now I get a different error.
Check what John says in this issue if it helps:
https://github.com/electron-userland/electron-windows-store/issues/118
Hope it helps
Hello and welcome to StackOverflow!
The error you're getting is quite clear: The supplied javaHome seems to be invalid
, so you probably just have to update the JAVA_HOME environment variable (if you are using it) or move your JDK to the path your IDE is looking for it (which is also listed in the error you posted, C:\Program Files\Java\jdk-23\bin\java.exe
).
That should be enough to solve the current issue.
Is XML supported in Azure SQL DB? Only 'CSV' | 'PARQUET' | 'DELTA' formats are mentioned.
Install Puppeteer Sharp: Add the NuGet package to your project: dotnet add package PuppeteerSharp
Use Puppeteer Sharp to render the HTML page as a PDF.
Use a library like System.Drawing.Printing to send the PDF to the printer.
I installed the latest .NET 9 SDK and it worked.
I had the same issue.
Only setting up the Anaconda Python interpreter path in VS code didn't work.
Try reinstalling the Python extension in VS Code - as described in this Microsoft vscode issue: https://github.com/microsoft/vscode-docs/issues/3839
For me it solved the problem.
We figured out the problem. It was a matter of slow Ceph. When deploying to a local volume, recovery from backup is much faster, the connection to PG did not have time to reset, and this allowed us to avoid errors during recovery.
Conclusion: if necessary, increase the time the connection with PG is kept alive, or optimize the speed of your storage.
And if there is no file by that name and snapd still can't be installed? The first answer didn't work at all, but the second one with zero upvotes worked fine.
I'm using com.google.gms:google-services:4.3.15 and I get the same error.
@echo off
REM Configuration - Customize these variables
set "source_folder=C:\path\to\your\source\folder" REM Replace with the actual source folder path set "destination_share=\server_name\share_name\destination\folder" REM Replace with the shared drive path set "log_file=transfer_log.txt" REM Path to the log file set "file_types=*.txt *.docx *.pdf" REM File types to transfer (e.g., *.txt, *.docx, *.pdf, . for all)
REM Create the log file (overwrite if it exists) echo Transfer started on %DATE% at %TIME% > "%log_file%"
REM Check if the source folder exists if not exist "%source_folder%" ( echo Error: Source folder "%source_folder%" not found. >> "%log_file%" echo Error: Source folder "%source_folder%" not found. pause exit /b 1 )
REM Check if the destination share is accessible (optional but recommended) pushd "%destination_share%" if errorlevel 1 ( echo Error: Destination share "%destination_share%" not accessible. >> "%log_file%" echo Error: Destination share "%destination_share%" not accessible. popd pause exit /b 1 ) popd
REM Transfer files
echo Transferring files from "%source_folder%" to "%destination_share%"... >> "%log_file%" echo Transferring files from "%source_folder%" to "%destination_share%"...
for %%a in (%file_types%) do ( for /r "%source_folder%" %%b in (%%a) do ( echo Copying "%%b" to "%destination_share%"... >> "%log_file%" echo Copying "%%b" to "%destination_share%"... copy "%%b" "%destination_share%" /y REM /y overwrites existing files without prompting if errorlevel 1 ( echo Error copying "%%b". >> "%log_file%" echo Error copying "%%b". ) ) )
echo Transfer complete. >> "%log_file%" echo Transfer complete.
REM Display the log file (optional) notepad "%log_file%"
pause
exit /b 0
Why is this batch file not working?
I'm waiting for this feature request to be implemented. Let's vote for it together!
We experienced a similar problem and eventually discovered that in Azure OpenAI you need to set the Asynchronous Content Filter option. It's buried in the Azure model deployment settings in the Azure AI Foundry portal.
Without that it is essentially internally buffering the streamed response to enable the system to scan and block / flag content before it's returned.
Verify that your onClick handler correctly toggles the state. For example, using useState:
const [isOpen, setIsOpen] = useState(false);
const toggleDropdown = (e) => {
e.stopPropagation(); // Prevents event bubbling
setIsOpen((prev) => !prev);
};
It seems like you are trying to use the OtlpGrpcSpanExporter
. gRPC is currently not supported. Could you try swapping out the OtlpGrpcSpanExporter
for an OtlpHttpSpanExporter
? This would mean data is exported via OTLP HTTP to the Dynatrace endpoint.
Protecting a private blockchain using a public blockchain can be achieved through several techniques that leverage the security, immutability, and decentralization of public networks while maintaining the confidentiality and efficiency of private networks. Here’s how:
🔹 How it works:
Instead of storing private data directly on the public blockchain, you hash the private blockchain’s critical data (blocks, transactions, or state) and record the hash on a public blockchain like Bitcoin or Ethereum. This ensures that if anyone tries to tamper with the private blockchain, the hashes won’t match, proving the integrity of the private chain.
🔹 Example:
A supply chain company runs a private blockchain but stores cryptographic hashes of transactions on Ethereum to prove their authenticity without exposing private data.
🔹 Projects/Protocols:
OpenTimestamps (Bitcoin-based proof of existence). Chainlink’s DECO (privacy-preserving oracle for verification).
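As a minimal sketch of the anchoring idea (the block structure and the publishing step are assumptions): hash the private chain's data locally, then record only the digest on the public chain.

import hashlib
import json

private_block = {
    "height": 1024,
    "transactions": ["tx-abc", "tx-def"],
    "prev_hash": "9f2c...",
}

# Canonical serialization so the same block always yields the same digest
digest = hashlib.sha256(
    json.dumps(private_block, sort_keys=True).encode("utf-8")
).hexdigest()

print(digest)
# Only this digest (never the data itself) would be written to Bitcoin/Ethereum,
# e.g. via OpenTimestamps or a small smart contract; anyone can later recompute
# the hash from the private data and compare it against the anchored value.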
🔹 How it works:
A private blockchain can interact with a public blockchain via smart contracts, where only specific verified data is shared. Allows private chains to benefit from public security while keeping sensitive data hidden.
🔹 Example:
A private medical records blockchain validates patient identities via a public blockchain without exposing personal data.
🔹 Projects/Protocols:
Hyperledger Fabric + Ethereum Polkadot’s parachains Cosmos (IBC - Inter-Blockchain Communication protocol)
🔹 How it works:
Instead of revealing private blockchain transactions, a Zero-Knowledge Proof (ZKP) allows verification of data validity without disclosing actual data. Public blockchains can verify private blockchain transactions without exposing details.
🔹 Example:
A private DeFi lending protocol could prove it holds enough collateral on a public blockchain without revealing user details.
🔹 Projects/Protocols:
ZK-SNARKs & ZK-STARKs (used in zkSync, StarkNet, and Aztec Network).
🔹 How it works:
Smart contracts on a public blockchain act as a decentralized notary, certifying transactions or agreements from a private blockchain. Reduces fraud risks by ensuring an immutable proof of existence.
🔹 Example:
A legal firm using a private blockchain for contracts can notarize key details on Ethereum for dispute resolution.
🔹 Projects/Protocols:
Civic (decentralized identity verification). NotaryChain (blockchain-based notarization).
🔹 How it works:
Private blockchains can store encrypted backups or checkpoints on a public blockchain. If a private blockchain is compromised, it can be restored using public blockchain proofs.
🔹 Example:
A private corporate blockchain backs up state changes onto Ethereum every 100 blocks to ensure disaster recovery.
🔹 Projects/Protocols:
Filecoin, Arweave, IPFS (decentralized storage for immutable backups).
1. Check that pip was updated correctly. 2. Try installing the packages in a conda or Python virtual environment.
// I'm facing the same issue
// I fixed it by using
// sx={{display: 'grid'}}
<DataGrid
rows={rows}
sx={{display: 'grid'}}
columns={columns}
checkboxSelection
onRowSelectionModelChange={(selectionModel) =>
handleRowSelectionChange(selectionModel as GridRowId[])
}
slots={{
toolbar: () => (
<CustomToolbar
exportToPDF={exportToPDF}
/>
),
noRowsOverlay: NoRowsOverlay,
}}
disableRowSelectionOnClick
density='compact'
pagination
paginationModel={paginationModel}
rowCount={rowCount}
paginationMode={paginationMode}
pageSizeOptions={[5, 10, 20, 50]}
onPaginationModelChange={handlePaginationModelChange}
/>
Here is an updated query working in version 8:
WITH RankedData AS (
    SELECT Klasse, Name, KW,
           RANK() OVER (PARTITION BY Klasse ORDER BY KW+0 DESC) AS class_rank
    FROM valar_date
),
NumberedData AS (
    SELECT Klasse, Name, KW, class_rank,
           ROW_NUMBER() OVER (ORDER BY class_rank, KW DESC) AS row_num
    FROM RankedData
)
SELECT CONCAT('group', FLOOR(row_num / 4) + 1) AS Groupname,
       GROUP_CONCAT(Name ORDER BY row_num SEPARATOR ', ') AS Players
FROM NumberedData
GROUP BY FLOOR(row_num / 4);
I'm using [email protected] and [email protected]. It seems like the issue can be solved by simply changing
from tensorflow.keras.layers import Dense # for example
to
from keras.layers import Dense
What I have found, after fighting with this for some weeks, is that the error comes because the case where one already owns shares of a company has not been considered. So the code must handle that: the shares should be added and the transactions table must be updated, instead of inserting a new row.
You can find it here:
https://git.yoctoproject.org/meta-lts-mixins/log/?h=scarthgap/rust
Normally I'd suggest using the layer index, https://layers.openembedded.org/ to search for it but it doesn't appear to be listed there.
Looks like this is a problem (bug?) with psycopg version 3.2.4 in particular. Try downgrading to 3.2.3; in my case it helped.
Don't use a Stack; use a ListView.
You can write data to dynamic destinations (tables) - each table may contain a separate schema version, e.g. "table_v1", "table_v2", etc. Apache Beam or another processing engine may be used. Next, you can query the data with a wildcard: https://cloud.google.com/bigquery/docs/querying-wildcard-tables. "BigQuery uses the schema for the most recently created table that matches the wildcard as the schema for the wildcard table." - this could do the job, but you should ensure that the table with the latest schema version was created last.
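For example, a small sketch of querying such versioned tables through a wildcard with the BigQuery Python client (the project, dataset and table names are assumptions):

from google.cloud import bigquery

client = bigquery.Client()

# The wildcard matches table_v1, table_v2, ... and uses the schema of the
# most recently created matching table, as noted above.
query = """
    SELECT *
    FROM `my-project.my_dataset.table_v*`
"""
for row in client.query(query).result():
    print(dict(row))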
Try updating your Material Components Gradle dependency:
'com.google.android.material:material:1.4.0'
I updated to this version and it solved my problem.
In the end, it seems that the answer was a combination of two factors:
The Command Prompt user interface is displayed to you because you execute commands interactively. What happens is that your application executes its command inside an IIS Express background session, because of which everything runs without displaying any output. Running commands through your C# application under IIS Express also produces a different working directory compared to your command-line operations, which affects file path references.
The system behavior of UiPath depends strongly on whether the Assistant stays connected to the internet. When users disconnect their internet after their robot goes online their processes tend to execute without issues. So, try these:
The observed behavior, where your command executes properly in CMD, reflects the difference between these environments, since under C# there is no visible output displayed.
Supply the --host="" parameter on the CLI.
This question is similar to one I answered here: https://stackoverflow.com/a/79410774/18161884.
In my answer, I explain how to set priorities for the camera and joystick to ensure they render correctly. Check it out for a detailed explanation and code examples. Hope this helps! 🚀
Please see below. Endpoint: https://developer.api.autodesk.com/data/v1/projects/:project_id/folders/urn:adsk.wipprod:fs.folder:co.N0nCOWbXSPeOcAz6Rw38tA
{
"jsonapi": {
"version": "1.0"
},
"data": {
"type": "folders",
"id": "urn:adsk.wipprod:fs.folder:co.N0nCOWbXSPeOcAz6Rw38tA",
"relationships": {
"parent": {
"data": {
"type": "folders",
"id": "urn:adsk.wipprod:fs.folder:co.xS2cbhg1T7iy7GKzfPDhQQ"
}
}
}
}
}
Later Later Edit:
Managed to work out a solution with DXL scripting. I created a DXL script that exports the entire document, and I was able to run this DXL via the command line.
Since the dynamic topic model (DTM) is a probabilistic model, word probabilities are never zero, even for words that do not occur in a time slice. But the DTM has a temporal smoothing parameter that influences the temporal continuity of topics. In the LdaSeqModel(), it's the chain_variance parameter. By increasing it, words that do not occur in a time slice get lower probabilities, also in the toy model given above.