In my Angular 19 project with Tailwind 4 (the app lives under frontend/webapp), I had to add this to ".vscode\settings.json":
"tailwindCSS.experimental.configFile": "frontend/webapp/src/styles.scss"
Then restart VS Code; after that it finally started working.
A "tailwind.config.js" is actually not needed anymore.
Follow this guide to set up Tailwind in Angular:
https://tailwindcss.com/docs/installation/framework-guides/angular
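For reference, here is the relevant .vscode/settings.json fragment as a complete file (a minimal sketch; the path matches the hierarchy above):
{
  // Point the Tailwind IntelliSense extension at the CSS-first config (Tailwind 4)
  "tailwindCSS.experimental.configFile": "frontend/webapp/src/styles.scss"
}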
What is "saveRefreshToken"? The library takes care of saving session data. Also "importAuthToken" takes in a refresh token parameter. See method
Hey, I have also been getting the same error; it seems like there is something wrong with Turborepo. I haven't found the solution yet. I used pnpm but got the same error, and this error didn't exist 2-3 months back. If you find a solution, please share.
The key you've been using has expired.
Solution: Log in to Firebase and generate a new key.
Ubuntu 16.04
Compare two camera JPEGs (motion detection)
# Command line
compare -metric NCC camLast.jpg camPrev.jpg null: ;echo
0.9974321
# Prog
floatDiff=$( compare -metric NCC camLast.jpg camPrev.jpg null: 2>&1 )
echo $floatDiff
- On two identical pictures, the result is 1.
- Results from the camera are between 0.997 and 0.998, due to camera noise and JPEG conversion loss.
- Results below 0.997 are true differences, meaning motion was detected in the camera's field of view.
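A minimal sketch of scripting that threshold (assumes bc is installed; 0.997 is the cutoff measured above):
floatDiff=$( compare -metric NCC camLast.jpg camPrev.jpg null: 2>&1 )
# bc prints 1 when the comparison holds, 0 otherwise
if [ "$(echo "$floatDiff < 0.997" | bc -l)" -eq 1 ]; then
    echo "Motion detected (NCC = $floatDiff)"
fi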
See our answer on another post related to generating schemas from dynamic objects - https://stackoverflow.com/a/79521253/5828912
The answer is that the path "C:\Users\HECTOR\Documents.next\server\vendor-chunks/data/Helvetica.afm" mixes separators: it has "\" (C:\Users\HECTOR) and "/" (chunks/data/Helvetica.afm). :(
I just upgraded to the latest version 1.2.1.2, and the Binding property has now been removed. Does anyone know how we're supposed to configure a custom binding now? I was using it for TransportWithMessageCredential security and SOAP 1.2.
var wsHttpBinding = new WSHttpBinding();
wsHttpBinding.Security.Mode = SecurityMode.TransportWithMessageCredential;
wsHttpBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;
wsHttpBinding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
var binding = new CustomBinding(wsHttpBinding);
MessageEncodingBindingElement encodingElement = binding.Elements.Find<MessageEncodingBindingElement>();
encodingElement.MessageVersion = MessageVersion.Soap12WSAddressing10;
Two things:
That error usually appears at every call site except one: the one that runs first and poisons the Lazy instance. The different one will have the actual error message. In my case:
Invalid `cargo metadata` output: Error("EOF while parsing a value", line: 1, column: 0)
When this has happened to me, it was because cargo metadata failed to run. In my case it was in a different workspace where one of my dependencies lived.
If you get the same first error as me, try running cargo metadata and see what error it reports. If it doesn't report an error, look carefully at which crate first gives the error and try running cargo metadata in that crate (or at least in the same workspace as the crate).
=INDEX(Sheet1!A:A;B1)
Syntax: =INDEX(range; row number; column number)
Sheet2:
A | B | C
---|---|---
=INDEX(Sheet1!A:A;B1) | =INDEX(Sheet1!B:B;B1) | =INDEX(Sheet1!C:C;B1)
Sheet1:
A | B | C
---|---|---
100 | 200 | 300
300 | 400 | 500
It does matter.
When I do a search, I set "Look in" to Entire Solution. Our company has many projects that I do not work on, so if the header file is not in the solution, it won't be found. If I set "Look in" to the absolute path, then it is found.
This cost me a lot of time.
If you have defined some hierarchy, I know how to help. SSAS enforces that children in a hierarchy are unique; otherwise, you get an error. What fixed the problem for me was going to the properties of the attribute and changing the "KeyColumns" property to the primary key and the "NameColumn" property to the attribute itself.
Sling allows you to register servlets that handle all types of HTTP interactions, so the default Sling servlet's behavior can be overridden at your discretion. If you want a general limit, you can also create filters that run on each POST or PUT request (the ones that would be used for uploading) and check that the upload requirements are met, as in the sketch below.
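A minimal sketch of such a filter, assuming OSGi Declarative Services; the component name and the 10 MB limit are illustrative, not a Sling default:
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.osgi.service.component.annotations.Component;

@Component(service = Filter.class, property = {"sling.filter.scope=REQUEST"})
public class UploadLimitFilter implements Filter {
    private static final long MAX_BYTES = 10L * 1024 * 1024; // hypothetical limit

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        String method = req.getMethod();
        // Only gate the verbs used for uploading; reject oversized bodies early.
        if (("POST".equals(method) || "PUT".equals(method))
                && req.getContentLengthLong() > MAX_BYTES) {
            ((HttpServletResponse) response).sendError(413); // Payload Too Large
            return;
        }
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}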
Similarly to @WarKa's answer, with Pydantic v2:
from typing import Literal, Union, Annotated
from dataclasses import dataclass
from fastapi import Form
from pydantic import RootModel, Field
@dataclass
class DeviceTokenGrant:
    grant_type: Literal["urn:ietf:params:oauth:grant-type:device_code"]
    client_id: str
    device_code: str

@dataclass
class RefreshTokenGrant:
    grant_type: Literal["refresh_token"]
    refresh_token: str

TokenGrant = RootModel[Annotated[Union[RefreshTokenGrant, DeviceTokenGrant], Field(discriminator="grant_type")]]

async def token(grant: Annotated[TokenGrant, Form()]):
    ...
Note the use of the discriminator attribute set on Field. See https://docs.pydantic.dev/latest/concepts/fields/#discriminator
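A quick sanity check of the discriminated union outside of FastAPI (the token value is made up):
grant = TokenGrant.model_validate(
    {"grant_type": "refresh_token", "refresh_token": "abc123"}
)
assert isinstance(grant.root, RefreshTokenGrant)  # selected via the discriminator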
While in Debug mode when running the program: at the top of the window, go to Debug -> Windows -> Watch, and select Watch 1.
Now go to "Add item to watch" and add your variable (in your case "board").
Now you can expand the item and see all the variables.
Please note that ScorpioBroker (and its temporal API) does not use the "fiware-service" and "fiware-servicepath" headers to separate data sets; instead it has a concept of "tenants".
To access a specific tenant you should use the "NGSILD-Tenant" header in all /ngsi-ld/v1/... requests. If this header is not specified, the "default" tenant name is used.
Each tenant gets a separate database; the tenant database name can be found in the "tenant" table in the main Scorpio DB ("ngb" by default).
The temporal entity values are stored in the tenant's own database in the "temporalentity" and "temporalentityattrinstance" tables.
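For example, querying the temporal API on a specific tenant (host, port, tenant name, and entity type here are placeholders):
curl -H "NGSILD-Tenant: mytenant" \
     "http://localhost:9090/ngsi-ld/v1/temporal/entities?type=Sensor"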
Per https://jackhenry.dev/open-api-docs/plugins/architecture/externalapplications/:
"The redirect URI that handles the initial authentication flow for your plugin must appear first in the Redirect URI list since Banno’s Dashboard UI expects to call the first redirect URI to render the plugin’s card face."
I think that's why you are seeing the OAuth flow work fine outside of Banno, but not work as you would expect for a plugin card inside of Banno.
Meaning: you have http://localhost:3030/cnx/auth-start?tid=75b94b7e-e60d-4cb6-bbac-e85949b4ca0e defined first and http://localhost:3030/cnx/oauth2?tid=75b94b7e-e60d-4cb6-bbac-e85949b4ca0e defined second, but per your question above, you started the auth flow with the second defined redirect URI, when you really want to start it with the first defined redirect URI for the plugin's card face.
Yes, in Visual Studio Code (VS Code), you can comment out entire Jupyter notebook cells without manually selecting all the text within each cell. This functionality allows you to quickly disable or enable code across multiple cells. Here's how you can do it:
Select Multiple Cells: Hold down the Ctrl key (or Cmd on macOS) and click on the cells you wish to comment out. Alternatively, click on the first cell, then hold down the Shift key and click on the last cell to select a range of cells.
Toggle Comments: With the desired cells selected, press Ctrl + / (or Cmd + / on macOS). This keyboard shortcut toggles comments for the selected cells, commenting out all lines within them if they are not already commented, or uncommenting them if they are.
For everybody coming here: it is possible now -> https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-custom-domains.html
You need a VPC endpoint and a private API Gateway.
I don't have experience with Kong Gateway, but I looked into the documentation you posted. Even if there is an EXPOSE instruction in the Dockerfile, it does not open any port on your host machine. You need to publish the ports in the docker run command, e.g.:
docker run -d --rm -p 8000:8000 -p 8443:8443 -p 8001:8001 -p 8444:8444 kong-image
"docker run -it --rm kong-image kong version" is only for testing purposes: it just prints the Kong version to the console.
So EXPOSE in a Dockerfile is for documentation purposes.
Thank you all. I have found:
{
  "type": "chrome",
  "request": "launch",
  "name": "Launch Chrome Debugger",
  "url": "http://localhost:3000",
  "webRoot": "${workspaceRoot}/src",
  "sourceMaps": true,
  "timeout": 15000,
  "trace": "verbose"
}
No port 9222 needed.
First, check the folder path in your project. Example:
D:\EMR> pip install -r requirements.txt
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
But my project folder actually looks like this:
D:\EMR\EMR
So I find the path and enter it:
cd D:\EMR\EMR
And finally you can run the pip install command:
PS D:\EMR\EMR> pip install -r requirements.txt
It's working; you can try it.
If you right-click on the title bar of Git Bash and then select Options, you can actually choose between a wide variety of themes, while maintaining the benefits of colorized output.
Hi captain, did you figure this out?
I believe the reason is that the gradients are calculated in every epoch for every weight. When one epoch is over, the gradients found for the weights stay stored on those weights. When we start the next epoch, the gradients are calculated again, and if the previous gradient value is still there, it gets added to the new gradient, making the result wrong.
Example:
epoch 1: gradient: 2
epoch 2: gradient: 3
If the gradients are not zeroed, the model adds both gradients and makes a move like: okay, 2 + 3 is 5, so we should reduce the weight by 5 to reduce the loss function. But the needed gradient is just 3.
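In PyTorch terms, here is a minimal sketch of where the zeroing goes (toy model and data; the API calls are standard):
import torch

model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()
x, y = torch.randn(8, 4), torch.randn(8, 1)

for epoch in range(2):
    opt.zero_grad()              # drop the gradient left over from the previous epoch
    loss = loss_fn(model(x), y)
    loss.backward()              # without zero_grad, this would add to the old .grad
    opt.step()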
Hey, this is my point of view. If I am wrong, kindly guide me.
The free trial probably does matter; if I had to guess, you do not have enough RAM, and the process is getting killed. Can you add more logging?
Could you reach out to Clerk support with a minimal reproduction of this? We can get this fixed up for you quickly I am sure!
I'm having the same issue as well. Still trying to figure it out. As others have already mentioned, 'npm start' is deleting the blocks/blocks-manifest.php file. While 'npm run build' works fine and regenerates the manifest file, it's just annoying having to run it every time a file is changed/saved.
I was on an old bundler. Upgrading now
def copy_formatting(openpyxl_sheet, from_col, to_col):
    # .... magic happens here ....
    return openpyxl_sheet
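A minimal sketch of what the elided "magic" could look like, copying a few style attributes cell by cell (the attribute list is my assumption, not the original author's code):
from copy import copy

def copy_formatting(openpyxl_sheet, from_col, to_col):
    for row in range(1, openpyxl_sheet.max_row + 1):
        src = openpyxl_sheet.cell(row=row, column=from_col)
        dst = openpyxl_sheet.cell(row=row, column=to_col)
        dst.font = copy(src.font)            # style objects must be copied, not shared
        dst.fill = copy(src.fill)
        dst.number_format = src.number_format
    return openpyxl_sheet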
Using Realm with SPM rather than pods seems to eliminate the issue
Interestingly enough, my package was installed correctly, but the NuGet package name <aaa.bbb.ccc.ddd> was not the same as the namespace that I needed to reference <aaa.bbb.xxx.yyy>.
If that's not what does it for you, here's a list of other items I tried:
- Directory.Packages.props: making sure the correct version is listed
- csproj file: making sure the package is listed there (I prefer not to have the version there at all, pulled only from Directory.Packages.props)
- csproj file: checking your target framework, ensuring the NuGet package is compatible
- NuGet manager: find the list of dependency requirements for your specific framework and versions and ensure they're compatible
- looked at the file path to the .dll and ensured that it existed for our target framework
- cleared all NuGet cache
- tried an earlier version
- tried a different PC
- restarted PC
- restarted Visual Studio after the NuGet install
- tried hitting 'run' despite the build not working
- looked for repos that use the same package and checked their versions / csproj (what would have helped me was seeing how they reference the class and what their using statement looked like)
According to the Boost docs:
binary_traits should be instantiated with either a function taking two parameters, or an adaptable binary function object (i.e., a class derived from std::binary_function or one which provides the same typedefs). (See §20.3.1 in the C++ Standard.)
A lambda doesn't meet either of those criteria, so it is not surprising that it doesn't work as-is. You will likely need to change your function to take a std::function (or apply a concept to enforce the boost::binary_traits requirements) instead of an arbitrary callable.
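For illustration, a minimal sketch of the std::function route (the function names are made up):
#include <functional>
#include <iostream>

// std::function pins down both argument types, which is the information
// a raw lambda type does not expose to binary_traits-style introspection.
void apply(const std::function<int(int, int)>& op) {
    std::cout << op(2, 3) << '\n';
}

int main() {
    apply([](int a, int b) { return a + b; });  // the lambda converts implicitly; prints 5
}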
If _links is not used, you cannot fetch embedded data. I need author data, so I am using:
url.com/wp-json/wp/v2/posts?_fields=title,content,_embedded,_links&_embed=author
You can store the AutoCloseable instance and close it explicitly in an @AfterEach method:
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.mockito.MockitoAnnotations;

private AutoCloseable closeable;

@BeforeEach
void setUp() {
    closeable = MockitoAnnotations.openMocks(this);
}

@AfterEach
void tearDown() throws Exception {
    closeable.close();
}
In SQL Server Configuration Manager, set the SQL Server Launchpad and Daemon Launcher services to the Disabled state and stop both of them. This will resolve the popup issue.
Use https://www.photopea.com/. It's a free browser Photoshop where you can upload your SVG, make edits (if needed), then export as PNG.
You can use the Map module from Core.
let t: Map.t<string, string> = Map.make()
let add = (key: string, value: string) => {
  t->Map.set(key, value)
}
let x = add("foo", "bar")
Console.log(t->Map.get("foo")) // => "bar"
https://rescript-lang.org/docs/manual/v11.0.0/api/core/map#value-set
=COUNTIFS(A:A;"*"& B1 &"*")
  | A | B | C
---|---|---|---
1 | Hello, my name is John, Hello, I'm John | Hello, people | =COUNTIFS(A:A;"*"& B1 &"*")
2 | Hello, I'm John, Hello, people call me John | Hello, my name is John | =COUNTIFS(A:A;"*"& B2 &"*")
3 | Hello, my name is John | |
4 | Hello, people | |
=COUNTIFS(A:A;"*"& "Hello, people" &"*")
https://github.com/awslabs/aws-c-iot is not an SDK; it's just used as one of the dependencies for the C++ IoT SDK and provides some functionality related to IoT devices.
So, answering your question: if you need a C SDK to interact with AWS IoT services, https://github.com/aws/aws-iot-device-sdk-embedded-C looks like the right choice.
As for the Yocto recipe: I found some tutorials and third-party recipes, but they're pretty outdated. I believe you'll need to make your own version.
When you attempt to connect to a Bluetooth device, the initial connection process involves establishing a link between your device and the remote device. This link is at the Bluetooth protocol level and does not yet involve specific services or ports. Here’s a breakdown of what happens:
Bluetooth Link Establishment: When you initiate a connection to a Bluetooth device, the Bluetooth stack on your device establishes a physical link with the remote device. This involves exchanging information such as device addresses and supported protocols.
Service Discovery: After the link is established, your device typically performs a service discovery process to identify the services (and their associated ports) that the remote device offers. This is done using the Service Discovery Protocol (SDP).
Port Connection Attempt: Once the services are discovered, your device attempts to connect to the specific port associated with the desired service. If the port is incorrect or the service is not available, this connection attempt will fail.
Disconnection: If the port connection attempt fails, the Bluetooth stack may then disconnect the link, resulting in the temporary "connected" status you observed.
Initial Link Establishment: The Bluetooth manager showed "connected" because the initial link between your device and the remote device was successfully established. This is a lower-level connection that does not yet verify the availability of specific services or ports.
Service Discovery and Port Connection: The connection to the specific port happens after the initial link is established. If the port is incorrect, the connection attempt to that port fails, but this happens after the initial link is already established.
Manager-Specific Behavior: The behavior you observed can also be influenced by the Bluetooth manager or stack implementation on your device. Some managers might show a "connected" status as soon as the initial link is established, even before the service discovery and port connection steps are completed.
The "connected" status you saw is due to the initial Bluetooth link being established successfully. The subsequent disconnection occurred because the specific port connection attempt failed. This behavior is typical of how Bluetooth connections work and is not necessarily specific to your Bluetooth manager. The network itself establishes the link first and then checks for the availability of the specific port or service.
I had the same issue, and I just disabled Rosetta and created a new machine. I didn't have to install Lima.
Just use an a href with a mailto: like the sample below.
="<a href='mailto:[email protected]'>email us</a> "
You should remove NVM v1.2.x and reinstall v1.1.12. After that, you can install Node 14.19.0 normally.
URI(x).then { "#{'https://' unless it.scheme}#{it}" }
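For example (this relies on Ruby 3.4's implicit it block parameter; the URLs are made up):
require 'uri'

puts URI("example.com/a").then { "#{'https://' unless it.scheme}#{it}" }
# => https://example.com/a
puts URI("http://example.com/a").then { "#{'https://' unless it.scheme}#{it}" }
# => http://example.com/a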
I have the same issue but with GoLand IDE. I executed the following to open a new project:
cd /Applications && ./GoLand.app/Contents/MacOS/goland dontReopenProjects
After that, I went to File > chose Invalidate Caches > ticked all boxes > clicked Invalidate and Restart.
Lastly, open your project back up.
You can set editor.suggestOnTriggerCharacters to false by entering the settings using Ctrl+,.
If that doesn't work, it is probably being overridden by a language-specific setting.
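The settings.json equivalent (the [python] block is only an example of the kind of language-specific override that could shadow the global value):
{
  "editor.suggestOnTriggerCharacters": false,
  "[python]": {
    "editor.suggestOnTriggerCharacters": false
  }
}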
The error {"error":"not found"} indicates that your WordPress page isn’t rendering properly, possibly due to:
Permalinks Issue • Go to Settings → Permalinks in your WordPress admin panel. • Click Save Changes (even without making changes) to refresh the permalink structure. • Now try accessing mysite.com/login again.
Page Slug Conflict • Ensure there’s no conflict with the /login slug. WordPress might be clashing with a system page or plugin route. • Try renaming the page slug to something like /custom-login and check again.
Theme Issue • Switch to a default WordPress theme like Twenty Twenty-Four to see if the issue is theme-related. • If this resolves the issue, your theme may have a custom route or filter affecting the /login page.
Plugin Conflict • Although you mentioned disabling/enabling plugins, try these steps: • Deactivate all plugins. • Access the /login page. • If it works, activate plugins one by one to identify the conflicting one.
Page Content Issue • Edit the Login page and ensure it has proper content. Sometimes an empty page or broken shortcode can cause issues.
.htaccess Issue • Go to your WordPress root directory and open the .htaccess file. • Ensure it includes WordPress’s default rules:
RewriteEngine On RewriteBase / RewriteRule ^index.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L]
If missing, add these rules and save the file.
Caching Issue • Clear your website cache (if you’re using a caching plugin). • Also, clear your browser cache or test in Incognito Mode.
Endpoint Conflict
If you’re using WooCommerce or any security plugin, they may override /login. In this case: • Check for settings that define custom login URLs.
Is there a way to select just one option in a form? So that when I link to the form, it auto-selects option 1 (a checkbox).
I managed to create a custom subsampler that works; if you have any suggestions, they are welcome:
#include <torch/torch.h>
#include <optional>
#include <numeric>
#include <vector>
#include <cstring>
class SubsetSampler : public torch::data::samplers::Sampler<std::vector<size_t>> {
private:
    std::vector<size_t> indices_;
    size_t current_;

public:
    // Type alias required by the Sampler interface.
    using BatchRequestType = std::vector<size_t>;

    explicit SubsetSampler(std::vector<size_t> indices)
        : indices_(std::move(indices)), current_(0) {}

    // Reset the sampler with an optional new size.
    // Providing a default argument so that a call with no parameters is allowed.
    void reset(std::optional<size_t> new_size = std::nullopt) override {
        if (new_size.has_value()) {
            if (new_size.value() < indices_.size()) {
                indices_.resize(new_size.value());
            }
        }
        current_ = 0;
    }

    // Returns the next batch.
    std::optional<BatchRequestType> next(size_t batch_size) override {
        BatchRequestType batch;
        while (batch.size() < batch_size && current_ < indices_.size()) {
            batch.push_back(indices_[current_++]);
        }
        if (batch.empty()) {
            return std::nullopt;
        }
        return batch;
    }

    // Serialize the sampler state.
    void save(torch::serialize::OutputArchive& archive) const override {
        // Convert indices_ to a tensor for serialization.
        torch::Tensor indices_tensor = torch::tensor(
            std::vector<int64_t>(indices_.begin(), indices_.end()), torch::kInt64);
        torch::Tensor current_tensor = torch::tensor(static_cast<int64_t>(current_), torch::kInt64);
        archive.write("indices", indices_tensor);
        archive.write("current", current_tensor);
    }

    // Deserialize the sampler state.
    void load(torch::serialize::InputArchive& archive) override {
        torch::Tensor indices_tensor, current_tensor;
        archive.read("indices", indices_tensor);
        archive.read("current", current_tensor);
        auto numel = indices_tensor.numel();
        std::vector<int64_t> temp(numel);
        std::memcpy(temp.data(), indices_tensor.data_ptr<int64_t>(), numel * sizeof(int64_t));
        indices_.resize(numel);
        for (size_t i = 0; i < static_cast<size_t>(numel); ++i) {
            indices_[i] = static_cast<size_t>(temp[i]);
        }
        current_ = static_cast<size_t>(current_tensor.item<int64_t>());
    }
};
It can be used when loading the dataset like this:
auto train_dataset = torch::data::datasets::MNIST(kDataRoot)
    .map(torch::data::transforms::Normalize<>(0.1307, 0.3081))
    .map(torch::data::transforms::Stack<>());
const size_t train_dataset_size = train_dataset.size().value();

std::vector<size_t> subset_indices(subset_size);
std::iota(subset_indices.begin(), subset_indices.end(), 0);
SubsetSampler sampler(subset_indices);

auto train_loader = torch::data::make_data_loader(
    std::move(train_dataset),
    sampler,
    torch::data::DataLoaderOptions().batch_size(kTrainBatchSize));
React uses synthetic events, so you won't be able to access them with the normal web APIs.
let addTodo = evt => {
  ReactEvent.Form.preventDefault(evt)
  let formElem = ReactEvent.Form.currentTarget(evt) // type is {..}, an open object
  let value = formElem["0"]["value"] // access the values on the object
  // do stuff with value
}
Regarding bindings to the webapis, there is an effort to add webapi bindings directly to the language with patterns that work better with ReScript 11+: https://rescript-lang.github.io/experimental-rescript-webapi/
Can you remove the Procfile and redeploy? Railway will automatically build it and run it with gunicorn.
Have you created the project already?
In order to attach a module to a project, you need to create the project first:
npx create-next-app@latest my-app
cd my-app
The package.json will be created, along with all the other necessary files. Then you can run the commands to install the modules you want.
If the project already exists, make sure to run the command in the root directory (where the package.json is located). Ex:
cd C:\Documents\Next\my-app
npm install tailwindcss @tailwindcss/cli
I used the JS code editor and it worked for me. Thank you @Rodrigo
The TrnAdd (Transaction Add) API can be used with a debit transaction code to increase the balance of an account. I will note that if this is an integral part of your integration (not just done for testing purposes), you will need to consider creating balanced transactions in the core.
Step 1: First TrnAdd request to affect the customer's account. This affects the customer's account and moves money to the application's settlement GL account.
Step 2: Second TrnAdd request to move money from the settlement GL account. Each FI will have a different GL account that is used for its settlement account, and it will need to be gathered from the FI.
Step 3: Third TrnAdd request to move money into the GL account used for tracking with the TPV application. Again, each FI will have a different GL account, which will need to be gathered from the FI.
Simply this way?
let num = 123456789.12;
console.log(num.toLocaleString('fr-FR')); // 123 456 789,12
I did my research on that, and it looks like this is something only for tables; views cannot be created with a specified engine to make them work on clusters (sadly).
Here is an example of how to do it with a table anyway: https://dev.mysql.com/doc/refman/8.4/en/mysql-cluster-install-example-data.html
=SUMPRODUCT(($B$19:$B$5589=B4),SUBTOTAL(109,OFFSET(M19,ROW($M$19:$M$5589)-ROW(M19),0)))
I have the same issue: both work in Excel and not in Google Sheets. The one above totals dollars, while the one below counts rows.
Error: SUMPRODUCT has mismatched range sizes. Expected row count: 5483, column count: 1. Actual row count: 1, column count: 1.
=SUMPRODUCT(SUBTOTAL(3,OFFSET(B19:B5592,ROW(B19:B5592)-MIN(ROW(B19:B5592)),,1,)),N(B19:B5592=H4))
Error: SUMPRODUCT has mismatched range sizes. Expected row count: 1, column count: 1. Actual row count: 5483, column count: 1.
I have been trying many versions of the formulas with no luck so far for two days. Any help? TY
Since you are using http instead of https, make sure to set K_COOKIE_SECURE to false in the shared/config/tce_config.php file:
define('K_COOKIE_SECURE', false);
Just use the following commands; I found this useful. You can try it.
git config --global user.email "[email protected]"
git config --global user.name "Your Name"
In which component are you using the providers? Try creating a layout component specifically for the /client/[id]/onboarding route.
For example:
import { Provider } from '...';
export default function OnboardingLayout({ children }) {
  return (
    <Provider>
      {children}
    </Provider>
  );
}
...two more consecutive reboots solved the issue. I was unable to determine the root cause.
What Tailwind version are you using?
For me this happened because I was doing torch.zeros((all_actions_mask.shape[0], 1)).bool().to(device_id); doing every operation on the CPU solved this error.
Command adb reverse tcp:3000 tcp:3000
Did you ever fix this error? I am stuck at the same place, and nothing I do is fixing it.
isLoading is indeed returning undefined every time. To address this, you can use isPending:
const { isPending: isUpdating, mutate: updateSettings } = useMutation({})
The React Query team introduced isPending, which works exactly the same as isLoading did.
The script on this page helped (not copying it here as it requires registration; I don't want to take their benefit away from them).
I was able to accomplish this with the following code, where I define the popup editor.
<editable mode="popup" template-id="popup-editor">
    <editable-window title="Add/Edit Collateral" width="80%" />
</editable>
Hello, I had exactly the same problem because I am using WSL. Your solution worked!
Virtual Environment Activation Guide (Windows and Ubuntu)
This guide provides instructions for activating Python virtual environments in Windows (Command Prompt and PowerShell) and Ubuntu (Bash).
1. Creating a Virtual Environment (Common Step)
Regardless of your operating system, create a virtual environment using the following command:
Bash
python -m venv venv_api
Replace venv_api with your desired virtual environment name.
2. Activating the Virtual Environment
Windows (Command Prompt - cmd.exe):
Navigate to your project directory:
DOS
cd path\to\your\project\crypto_api
Replace path\to\your\project\crypto_api with the actual path.
Activate the virtual environment:
DOS
venv_api\Scripts\activate.bat
Windows (PowerShell):
Navigate to your project directory:
PowerShell
cd path\to\your\project\crypto_api
Replace path\to\your\project\crypto_api with the actual path.
Activate the virtual environment:
PowerShell
.\venv_api\Scripts\activate
Ubuntu (Bash):
Navigate to your project directory:
Bash
cd /path/to/your/project/crypto_api
Replace /path/to/your/project/crypto_api with the actual path.
Activate the virtual environment:
Bash
source venv_api/bin/activate
Or, if you are already inside the venv_api folder:
Bash
source bin/activate
3. After Activation
Your command prompt will change to indicate the active virtual environment:
(Windows cmd.exe): (venv_api) D:\path\to\your\project\crypto_api>
(Windows PowerShell): (venv_api) PS D:\path\to\your\project\crypto_api>
(Ubuntu): (venv_api) user@hostname:~/path/to/your/project/crypto_api$
4. Deactivating the Virtual Environment (Common Step)
To deactivate the virtual environment, use the following command in all environments:
Bash
deactivate
Troubleshooting (Ubuntu):
"Permission denied" error: If you encounter this error, run:
Bash
chmod +x venv_api/bin/activate
Incorrect path: Always double-check your paths.
This is a very good source to learn more about spacings.
However, I have tried to apply the methods shown here to subplots with twin axes, as shown below. Somehow, it doesn't fill the entire horizontal space of the figure, leaving some empty space. Has anyone faced a similar issue and knows how to solve it?
This is what I have tried:
mosaic = [["A", "A"],
["B", "C"]]
fig = plt.figure(dpi=600)
fig, axs = plt.subplot_mosaic(
mosaic,
layout="constrained",
gridspec_kw={"height_ratios": [1.25, 1],
"width_ratios": [1, 1.5]} # Adjust widths: A = 1, B/C = any vals
)
plt.style.use("dark_background")
plt.suptitle('AQ_20 @ 223K', fontweight = 'bold')
# Titles and labels
axs["B"].set_title("Rocking scan")
axs["B"].set_ylabel("Intensity (a.u.)")
axs["B"].set_xlabel(r"$\Delta_{diffry}$ (deg.)")
axs["C"].set_ylabel("Intensity @ max. (a.u.)")
axs["C"].set_xlabel(r"$t$ (s)")
# Limits
axs["C"].set_xlim(-5, 410)
#axs["C"].set_ylim(1.30, 1.65)
axs["B"].set_xlim(-0.2, 0.2)
#axs["B"].set_ylim(-1, 22)
# axs["A"].set_ylim(2560, 0)
# axs["A"].set_xlim(0, 2160)
axs["A"].set_aspect("auto")
axs["A"].set_title("Local structure @ rocking max")
#axs["A"].axis("off")
axC_twin1 = axs["C"].twinx()
axs["C"].plot([0, 100, 200, 300, 400], [1, 2, 3, 4, 5], label="Primary y-axis", color='blue')
axC_twin1.plot([0, 100, 200, 300, 400], [5, 4, 3, 2, 1], label="Twin y-axis", color='red')
axC_twin2 = axs["C"].twinx()
axC_twin1.plot([0, 100, 200, 300, 400], [2, 2, 2, 2, 2], label="Twin y-axis", color='red')
axC_twin1.set_ylabel(r"$\Delta_{diffy}$", color='red')
axC_twin2.set_ylabel("FWHM (.deg)", color='green')
axC_twin2.spines['right'].set_position(('outward', 30))
Try adding this to your Info.plist file:
<key>FacebookAdvertiserIDCollectionEnabled</key>
<true/>
I am indeed the OP. The answer struck me when I articulated my question here, so I thought I'd post the answer myself, as it might help someone else. I would still be grateful if others add to this answer or point out any mistakes.
The answer is in the declaration of hash_t. It is not a 'variable' pointer to a type, but rather an array of the type. In C we cannot reassign an array name to point to some different location. The code in question does this in the line hash_t hashTable[] = *hashTable_ptr;
Although one thing that I still don't understand is that, if I modify the function definition to:
void FreeHash(hash_t hashTable_ptr)
and then pass the dereferenced pointer to array while calling the function as:
FreeHash(*srptr->symTable);
Then the code works. If someone can explain that, I'll be grateful.
This is the corrected code by the way:
void FreeHash(hash_t* hashTable_ptr) {
    varNode *prev, *curr;
    /* freeing each entry */
    for (int i = 0; i < HASHSIZE; i++) {
        prev = NULL;
        curr = (*hashTable_ptr)[i];
        while (curr != NULL) {
            prev = curr;
            curr = curr->next;
            free(prev);
        }
        (*hashTable_ptr)[i] = NULL;
    }
    free(hashTable_ptr);
}
There was no value set for the below while running locally. Thanks for all the help.
os.getenv('MYSQL_PORT')
I need to create an identifier column when importing the file into the database.
Then this is the solution:
This is great and all, but I wanted to share a way to know when the user has released the scrollView; that's basically when the user has stopped scrolling (even though the scrollView may still be scrolling because of the velocity of the user's drag).
Here we need to check when dragDetails is null, because that means the user isn't dragging the screen anymore. It isn't the best solution, because there may be edge cases that I haven't seen yet, but it works.🙌🏽
NotificationListener<ScrollNotification>(
  onNotification: (ScrollNotification notification) {
    if (notification is ScrollUpdateNotification) {
      if (notification.dragDetails == null) {
        // User has just released the ScrollView
        print('User released the ScrollView');
        // Your code to handle release goes here
      }
    }
    return true;
  },
  child: ListView.builder(
    itemCount: 50,
    itemBuilder: (context, index) => ListTile(
      title: Text('Item $index'),
    ),
  ),
)
(P.S, this is my first answer on StackOverflow. go easy on me🙇🏽)
As of Python 3.12, this is supported using the quoting value csv.QUOTE_STRINGS. See the documentation here.
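A minimal sketch (Python 3.12+; csv.QUOTE_STRINGS quotes the strings but leaves the numbers bare):
import csv
import sys

writer = csv.writer(sys.stdout, quoting=csv.QUOTE_STRINGS)
writer.writerow(["text", 1, 2.5])  # -> "text",1,2.5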
This code works fine for me:
return ((ResponseStatusException) ex).getStatusCode().equals(HttpStatus.NOT_FOUND);
Hidden imports are not visible to PyInstaller. The embedding function implicitly imports modules, and a class named ONNXMiniLM_L6_V2 comes from one of these modules. This class also uses importlib.import_module to import such dependencies as onnxruntime, tokenizers, and tqdm. So we need to deal with all of these imports.
To reproduce this error, we need a minimal project for PyInstaller.
Environment:
Project structure:
somedir\
env # virtual environment directory
pyinst # directory for pyinstaler files
embedding.py # additional file for --onefile
main.py
Python files:
embedding.py
def embedding_function():
    return "Hello"
main.py
import tkinter as tk
import chromadb
from embedding import embedding_function
root = tk.Tk()
label = tk.Label(root, text=embedding_function())
label.pack()
root.mainloop()
Steps to reproduce:
Create the Python files and the pyinst directory in some directory. Run the following in cmd.
Volume:\somedir>python -m venv env
Volume:\somedir>env\scripts\activate
(env) Volume:\somedir>python -m pip install chromadb
...
(env) Volume:\somedir>python -m pip install pyinstaller
...
(env) Volume:\somedir>cd pyinst
(env) Volume:\somedir\pyinst>python -m PyInstaller "Volume:\somedir\main.py" --onefile -w
...
PyInstaller will generate the necessary directories and the main.spec file.
pyinst\
build # directory
dist # directory, main.exe here
main.spec
When I try to run main.exe, I get the same error: NameError: name 'ONNXMiniLM_L6_V2' is not defined.
At this point, we need to create a hook for chromadb and edit the spec file to handle the hidden imports.
hook-chromadb.py
from PyInstaller.utils.hooks import collect_submodules
# --collect-submodules
sm = collect_submodules('chromadb')
hiddenimports = [*sm]
Edit hiddenimports (--hidden-import) and hookspath (--additional-hooks-dir).
# -*- mode: python ; coding: utf-8 -*-
a = Analysis(
    ['Volume:\\somedir\\main.py'],
    pathex=[],
    binaries=[],
    datas=[],
    hiddenimports=['onnxruntime', 'tokenizers', 'tqdm'],
    hookspath=['Volume:\\path to the directory where the hook file is located'],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
    optimize=0,
)
pyz = PYZ(a.pure)
exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    [],
    name='main',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=False,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)
Run pyinstaller:
(env) Volume:\somedir\pyinst>python -m PyInstaller main.spec --clean
...
Now I can run main.exe without errors and see the root window.
Alternatively, we can do the same thing entirely from the command line:
(env) Volume:\somedir\pyinst>python -m PyInstaller "Volume:\somedir\main.py" --onefile -w --collect-submodules chromadb --hidden-import onnxruntime --hidden-import tokenizers --hidden-import tqdm --clean
The generated spec file in this case:
# -*- mode: python ; coding: utf-8 -*-
from PyInstaller.utils.hooks import collect_submodules
hiddenimports = ['onnxruntime', 'tokenizers', 'tqdm']
hiddenimports += collect_submodules('chromadb')
a = Analysis(
    ['Volume:\\somedir\\main.py'],
    pathex=[],
    binaries=[],
    datas=[],
    hiddenimports=hiddenimports,
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
    optimize=0,
)
pyz = PYZ(a.pure)
exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    [],
    name='main',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=False,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)
You can do the same for other dependencies if they have hidden imports.
I had to use the following to get it working:
%environment
export TINI_SUBREAPER=true
It's very annoying. I created this script to automate most things (except the IP) on macOS:
https://gist.github.com/woutervanwijk/71c9d36cf38544c99f4b5399ca80fea3
It is entirely possible that messages with differing group IDs will exist in a single batch.
There are three rules governing the order of messages leaving a FIFO queue, which help explain the processing behavior:
1. Return the oldest message where no other message with the same MessageGroupId is in flight.
2. Return as many messages with the same MessageGroupId as possible.
3. If a message batch is still not full, go back to the first rule. As a result, it's possible for a single batch to contain messages from multiple MessageGroupIds.
See https://aws.amazon.com/blogs/compute/new-for-aws-lambda-sqs-fifo-as-an-event-source/ for more information.
Thanks for your help. This worked after I changed the source dataset to a SQL database; I think there is some weird behavior with Oracle databases.
The preceding copy activity was against an Oracle database, so I had kept the log activity against the same source. When I changed the source to a SQL database, it worked.
On my side I had this trouble and was able to solve it by using a lambda expression to call my method, like this:
vscode.workspace.onDidCloseTextDocument((x)=>this.extensionCloseDocument(x));
I would try tying it to onBlur which will trigger the input value to be saved to state when the input box loses focus.
This question is exactly something I'm thinking about, but I want to go a step further. Once we have all the "raw data" indices of the incorrectly labeled data, what do we do? What sort of things can be done to analyze WHY something got labeled incorrectly?
There is LOTS of information on how to score model performance, but what is the next level of troubleshooting? How do we start to analyze WHY things are being mislabeled?
I got this error too. I also had a button that was linked to a procedure which was named like the module it was in. After I renamed the module, the error occurred. I had a version from before my changes, so I tried some things.
What finally worked was:
After the error had already occurred:
Before renaming the module:
I hope this helps!
Recently I found how to disable the suggestion list in C# and F# completely, even after special characters like "." are typed. This is possible in Visual Studio 2022.
There is a checkbox in Options. Path to the checkbox: Options -> Text Editor -> All Languages -> Auto list members. Uncheck this checkbox, and after that the completion list will not pop up automatically after ".".
If you want to stop the completion list from popping up automatically after part of a statement has already been typed in C#/F#, you need to uncheck another checkbox. Path to the checkbox: Options -> Text Editor -> C# (F#) -> Show completion list after a character is typed.
This also happened to me; however, for me it was because I had moved my overrides folder (which I kept on my desktop) earlier that day to clean up my desktop. I moved the folder back and it worked again.
If you are still looking for the solution: adding secretmanager.googleapis.com to both no_proxy and NO_PROXY does work. Make sure to source it.
If you have done that and it's still not working in your IDE, make sure to kill the IDE and restart it. This gives the IDE a new session with the updated env.
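For example, in your shell profile (keep whatever values you already had; the profile file name depends on your shell):
export no_proxy="$no_proxy,secretmanager.googleapis.com"
export NO_PROXY="$NO_PROXY,secretmanager.googleapis.com"
source ~/.bashrc   # or restart the shell / IDE so it picks up the new env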
I proposed an answer for a very similar question here: https://stackoverflow.com/a/79520750/5552507
It relies on plotly without html.
The semi-colon terminates the inner block, telling the parser that whatever comes next belongs to the outer block.
How did you install the dotnet SDK? and how did you start your dotnet project?
In case anyone stumbles upon something similar in C#, here is the syntax for matching all kinds of dashes: \p{Pd}
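For example, normalizing every Unicode dash to a plain hyphen (the sample string is made up):
using System.Text.RegularExpressions;

// \p{Pd} matches the Unicode "Punctuation, dash" category (hyphen, en dash, em dash, ...)
var normalized = Regex.Replace("a\u2013b\u2014c-d", @"\p{Pd}", "-");
Console.WriteLine(normalized); // a-b-c-d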
Well, I realized yesterday the stupid thing I did that was causing a lot of my confusion: I thought I was supposed to create a page at /saml/acs to handle the response from the IdP. Once I renamed that page to something else, the HttpModule handled everything for me and parsed/validated the response. It also authenticates the user using "Federated" cookie authentication, which I am not familiar with.
So now my question is: is there some way for me to simply get notified that the SAML validation was successful and let me handle the authentication using the normal ASP.NET "Forms" authentication? Basically I just need to look at the NameID coming from the SAML packet and use it to look up the corresponding user in my database and authenticate them.
All you have to do is the following. Most of the answers provided here do not work with pandas 2.2.3. Instead of saving the Series as CSV and loading it back, I saved the pandas object as a pickle using df.to_pickle() and read it back using pd.read_pickle().
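A minimal sketch of the round trip (the file name is arbitrary):
import pandas as pd

s = pd.Series([1, 2, 3], name="example")
s.to_pickle("series.pkl")          # preserves dtype, name, and index exactly
restored = pd.read_pickle("series.pkl")
assert s.equals(restored)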
If your goal is to migrate from WSO2 IS 5.7.0 to 7.0.0, and you're using the same user store, you will need to migrate the existing database schema from the old version (5.7.0) to the new schema used by 7.0.0.